2025-08-29 16:35:46.216689 | Job console starting
2025-08-29 16:35:46.229341 | Updating git repos
2025-08-29 16:35:46.862026 | Cloning repos into workspace
2025-08-29 16:35:47.148451 | Restoring repo states
2025-08-29 16:35:47.169935 | Merging changes
2025-08-29 16:35:47.169957 | Checking out repos
2025-08-29 16:35:47.684595 | Preparing playbooks
2025-08-29 16:35:48.384492 | Running Ansible setup
2025-08-29 16:35:52.937220 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-08-29 16:35:53.687029 |
2025-08-29 16:35:53.687203 | PLAY [Base pre]
2025-08-29 16:35:53.704437 |
2025-08-29 16:35:53.704568 | TASK [Setup log path fact]
2025-08-29 16:35:53.742433 | orchestrator | ok
2025-08-29 16:35:53.762894 |
2025-08-29 16:35:53.763034 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-08-29 16:35:53.806297 | orchestrator | ok
2025-08-29 16:35:53.820511 |
2025-08-29 16:35:53.820623 | TASK [emit-job-header : Print job information]
2025-08-29 16:35:53.875699 | # Job Information
2025-08-29 16:35:53.875943 | Ansible Version: 2.16.14
2025-08-29 16:35:53.876005 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-08-29 16:35:53.876062 | Pipeline: post
2025-08-29 16:35:53.876103 | Executor: 521e9411259a
2025-08-29 16:35:53.876139 | Triggered by: https://github.com/osism/testbed/commit/ef698e79bb91e5c1a90863c5f63ed3556fcd4722
2025-08-29 16:35:53.876178 | Event ID: 7e34c700-84e1-11f0-9853-a2f38d9dedb1
2025-08-29 16:35:53.886277 |
2025-08-29 16:35:53.886410 | LOOP [emit-job-header : Print node information]
2025-08-29 16:35:54.017830 | orchestrator | ok:
2025-08-29 16:35:54.018111 | orchestrator | # Node Information
2025-08-29 16:35:54.018161 | orchestrator | Inventory Hostname: orchestrator
2025-08-29 16:35:54.018198 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-08-29 16:35:54.018230 | orchestrator | Username: zuul-testbed06
2025-08-29 16:35:54.018259 | orchestrator | Distro: Debian 12.11
2025-08-29 16:35:54.018344 | orchestrator | Provider: static-testbed
2025-08-29 16:35:54.018375 | orchestrator | Region:
2025-08-29 16:35:54.018406 | orchestrator | Label: testbed-orchestrator
2025-08-29 16:35:54.018434 | orchestrator | Product Name: OpenStack Nova
2025-08-29 16:35:54.018462 | orchestrator | Interface IP: 81.163.193.140
2025-08-29 16:35:54.047507 |
2025-08-29 16:35:54.047665 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-08-29 16:35:54.539916 | orchestrator -> localhost | changed
2025-08-29 16:35:54.564525 |
2025-08-29 16:35:54.564867 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-08-29 16:35:55.669010 | orchestrator -> localhost | changed
2025-08-29 16:35:55.693149 |
2025-08-29 16:35:55.693344 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-08-29 16:35:55.991330 | orchestrator -> localhost | ok
2025-08-29 16:35:55.999317 |
2025-08-29 16:35:55.999456 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-08-29 16:35:56.036396 | orchestrator | ok
2025-08-29 16:35:56.057396 | orchestrator | included: /var/lib/zuul/builds/1c639c5fc7074b538a16b20865708ec8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-08-29 16:35:56.069709 |
2025-08-29 16:35:56.069825 | TASK [add-build-sshkey : Create Temp SSH key]
2025-08-29 16:35:57.756880 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-08-29 16:35:57.757384 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/1c639c5fc7074b538a16b20865708ec8/work/1c639c5fc7074b538a16b20865708ec8_id_rsa
2025-08-29 16:35:57.757490 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/1c639c5fc7074b538a16b20865708ec8/work/1c639c5fc7074b538a16b20865708ec8_id_rsa.pub
2025-08-29 16:35:57.757564 | orchestrator -> localhost | The key fingerprint is:
2025-08-29 16:35:57.757659 | orchestrator -> localhost | SHA256:oXxjILm7F3mevEqPaoRI/JqlfwXaq2J0pbi62i0Qjqo zuul-build-sshkey
2025-08-29 16:35:57.757727 | orchestrator -> localhost | The key's randomart image is:
2025-08-29 16:35:57.757814 | orchestrator -> localhost | +---[RSA 3072]----+
2025-08-29 16:35:57.757876 | orchestrator -> localhost | | |
2025-08-29 16:35:57.757937 | orchestrator -> localhost | | . |
2025-08-29 16:35:57.757995 | orchestrator -> localhost | |. o . . |
2025-08-29 16:35:57.758049 | orchestrator -> localhost | |.o * o . |
2025-08-29 16:35:57.758103 | orchestrator -> localhost | |+oo.* +.S |
2025-08-29 16:35:57.758171 | orchestrator -> localhost | |++.*.oo+.. |
2025-08-29 16:35:57.758228 | orchestrator -> localhost | |o.B.. += . |
2025-08-29 16:35:57.758318 | orchestrator -> localhost | |.O...=.o+ |
2025-08-29 16:35:57.758384 | orchestrator -> localhost | |E.+=*oo.o. |
2025-08-29 16:35:57.758442 | orchestrator -> localhost | +----[SHA256]-----+
2025-08-29 16:35:57.758582 | orchestrator -> localhost | ok: Runtime: 0:00:01.180709
2025-08-29 16:35:57.774266 |
2025-08-29 16:35:57.774480 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-08-29 16:35:57.814215 | orchestrator | ok
2025-08-29 16:35:57.827887 | orchestrator | included: /var/lib/zuul/builds/1c639c5fc7074b538a16b20865708ec8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-08-29 16:35:57.838654 |
2025-08-29 16:35:57.838760 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-08-29 16:35:57.863472 | orchestrator | skipping: Conditional result was False
2025-08-29 16:35:57.872669 |
2025-08-29 16:35:57.872777 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-08-29 16:35:58.453370 | orchestrator | changed
2025-08-29 16:35:58.465655 |
2025-08-29 16:35:58.465807 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-08-29 16:35:58.771196 | orchestrator | ok
2025-08-29 16:35:58.782360 |
2025-08-29 16:35:58.782510 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-08-29 16:35:59.198520 | orchestrator | ok
2025-08-29 16:35:59.207547 |
2025-08-29 16:35:59.207687 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-08-29 16:35:59.655456 | orchestrator | ok
2025-08-29 16:35:59.665953 |
2025-08-29 16:35:59.666098 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-08-29 16:35:59.692300 | orchestrator | skipping: Conditional result was False
2025-08-29 16:35:59.704974 |
2025-08-29 16:35:59.705126 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-08-29 16:36:00.165035 | orchestrator -> localhost | changed
2025-08-29 16:36:00.180781 |
2025-08-29 16:36:00.180926 | TASK [add-build-sshkey : Add back temp key]
2025-08-29 16:36:00.549597 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/1c639c5fc7074b538a16b20865708ec8/work/1c639c5fc7074b538a16b20865708ec8_id_rsa (zuul-build-sshkey)
2025-08-29 16:36:00.549879 | orchestrator -> localhost | ok: Runtime: 0:00:00.011152
2025-08-29 16:36:00.557584 |
2025-08-29 16:36:00.557701 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-08-29 16:36:00.999693 | orchestrator | ok
2025-08-29 16:36:01.009187 |
2025-08-29 16:36:01.009392 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-08-29 16:36:01.044707 | orchestrator | skipping: Conditional result was False
2025-08-29 16:36:01.124455 |
2025-08-29 16:36:01.124608 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-08-29 16:36:01.529149 | orchestrator | ok
2025-08-29 16:36:01.541578 |
2025-08-29 16:36:01.541704 | TASK [validate-host : Define zuul_info_dir fact]
2025-08-29 16:36:01.582060 | orchestrator | ok
2025-08-29 16:36:01.589398 |
2025-08-29 16:36:01.589503 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-08-29 16:36:01.918175 | orchestrator -> localhost | ok
2025-08-29 16:36:01.935764 |
2025-08-29 16:36:01.935924 | TASK [validate-host : Collect information about the host]
2025-08-29 16:36:03.218442 | orchestrator | ok
2025-08-29 16:36:03.235622 |
2025-08-29 16:36:03.235733 | TASK [validate-host : Sanitize hostname]
2025-08-29 16:36:03.303702 | orchestrator | ok
2025-08-29 16:36:03.313611 |
2025-08-29 16:36:03.313745 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-08-29 16:36:03.887260 | orchestrator -> localhost | changed
2025-08-29 16:36:03.901563 |
2025-08-29 16:36:03.901722 | TASK [validate-host : Collect information about zuul worker]
2025-08-29 16:36:04.348403 | orchestrator | ok
2025-08-29 16:36:04.354388 |
2025-08-29 16:36:04.354507 | TASK [validate-host : Write out all zuul information for each host]
2025-08-29 16:36:04.947246 | orchestrator -> localhost | changed
2025-08-29 16:36:04.967939 |
2025-08-29 16:36:04.968104 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-08-29 16:36:05.278020 | orchestrator | ok
2025-08-29 16:36:05.288074 |
2025-08-29 16:36:05.288212 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-08-29 16:36:25.088353 | orchestrator | changed:
2025-08-29 16:36:25.088666 | orchestrator | .d..t...... src/
2025-08-29 16:36:25.088717 | orchestrator | .d..t...... src/github.com/
2025-08-29 16:36:25.088754 | orchestrator | .d..t...... src/github.com/osism/
2025-08-29 16:36:25.088786 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-08-29 16:36:25.088816 | orchestrator | RedHat.yml
2025-08-29 16:36:25.104639 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-08-29 16:36:25.104661 | orchestrator | RedHat.yml
2025-08-29 16:36:25.104718 | orchestrator | = 2.2.0"...
2025-08-29 16:36:42.681372 | orchestrator | 16:36:42.681 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-08-29 16:36:42.708898 | orchestrator | 16:36:42.708 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-08-29 16:36:43.230688 | orchestrator | 16:36:43.230 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-08-29 16:36:43.900318 | orchestrator | 16:36:43.900 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-08-29 16:36:43.978170 | orchestrator | 16:36:43.977 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-08-29 16:36:44.580804 | orchestrator | 16:36:44.580 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-08-29 16:36:45.050830 | orchestrator | 16:36:45.050 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
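The `tofu init` run above resolves version constraints and installs three providers. As a hypothetical sketch (the testbed repository's actual `*.tf` files are not part of this log), a minimal constraints file consistent with the versions shown would look like:

```shell
# Hypothetical provider-constraints file consistent with the versions the
# init step resolves above; the real configuration is not shown in this log.
workdir="$(mktemp -d)"
cd "$workdir"
cat > versions.tf <<'EOF'
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
    local = { source = "hashicorp/local" }
    null  = { source = "hashicorp/null" }
  }
}
EOF
# Running "tofu init" in this directory would install the providers and
# write .terraform.lock.hcl, pinning the resolved versions.
grep -c 'source' versions.tf
```

Committing the resulting `.terraform.lock.hcl` is what lets later `tofu init` runs reproduce the same provider selections, as the log text below explains.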
2025-08-29 16:36:45.883985 | orchestrator | 16:36:45.883 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-08-29 16:36:45.884065 | orchestrator | 16:36:45.883 STDOUT terraform: Providers are signed by their developers.
2025-08-29 16:36:45.884072 | orchestrator | 16:36:45.883 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-08-29 16:36:45.884077 | orchestrator | 16:36:45.883 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-08-29 16:36:45.884093 | orchestrator | 16:36:45.883 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-08-29 16:36:45.884106 | orchestrator | 16:36:45.883 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-08-29 16:36:45.884160 | orchestrator | 16:36:45.884 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-08-29 16:36:45.884191 | orchestrator | 16:36:45.884 STDOUT terraform: you run "tofu init" in the future.
2025-08-29 16:36:45.884283 | orchestrator | 16:36:45.884 STDOUT terraform: OpenTofu has been successfully initialized!
2025-08-29 16:36:45.884336 | orchestrator | 16:36:45.884 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-08-29 16:36:45.884417 | orchestrator | 16:36:45.884 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-08-29 16:36:45.884458 | orchestrator | 16:36:45.884 STDOUT terraform: should now work.
2025-08-29 16:36:45.884536 | orchestrator | 16:36:45.884 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-08-29 16:36:45.884617 | orchestrator | 16:36:45.884 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-08-29 16:36:45.884685 | orchestrator | 16:36:45.884 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-08-29 16:36:46.014326 | orchestrator | 16:36:46.014 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-08-29 16:36:46.014407 | orchestrator | 16:36:46.014 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-08-29 16:36:46.238824 | orchestrator | 16:36:46.238 STDOUT terraform: Created and switched to workspace "ci"!
2025-08-29 16:36:46.238968 | orchestrator | 16:36:46.238 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-08-29 16:36:46.239028 | orchestrator | 16:36:46.238 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-08-29 16:36:46.239064 | orchestrator | 16:36:46.239 STDOUT terraform: for this configuration.
2025-08-29 16:36:46.404621 | orchestrator | 16:36:46.404 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-08-29 16:36:46.404700 | orchestrator | 16:36:46.404 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-08-29 16:36:46.529941 | orchestrator | 16:36:46.529 STDOUT terraform: ci.auto.tfvars
2025-08-29 16:36:46.535493 | orchestrator | 16:36:46.535 STDOUT terraform: default_custom.tf
2025-08-29 16:36:46.660298 | orchestrator | 16:36:46.660 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
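The Terragrunt WARN lines above name their own replacements. A sketch of the migration, using the path taken from the log; the exact `workspace` arguments are an assumption, since the original command lines are not shown here:

```shell
# Migrations suggested by the deprecation warnings:
#   TERRAGRUNT_TFPATH=<path>     ->  TG_TF_PATH=<path>
#   terragrunt workspace <args>  ->  terragrunt run -- workspace <args>
#   terragrunt fmt               ->  terragrunt run -- fmt
export TG_TF_PATH=/home/zuul-testbed06/terraform  # replaces TERRAGRUNT_TFPATH
# terragrunt run -- workspace new ci   # hypothetical; original args not in log
# terragrunt run -- fmt
echo "$TG_TF_PATH"
```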
2025-08-29 16:36:47.690566 | orchestrator | 16:36:47.690 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-08-29 16:36:48.234080 | orchestrator | 16:36:48.233 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-08-29 16:36:48.487735 | orchestrator | 16:36:48.487 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-08-29 16:36:48.487804 | orchestrator | 16:36:48.487 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-08-29 16:36:48.487911 | orchestrator | 16:36:48.487 STDOUT terraform:   + create
2025-08-29 16:36:48.487971 | orchestrator | 16:36:48.487 STDOUT terraform:  <= read (data resources)
2025-08-29 16:36:48.488061 | orchestrator | 16:36:48.487 STDOUT terraform: OpenTofu will perform the following actions:
2025-08-29 16:36:48.488900 | orchestrator | 16:36:48.488 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-08-29 16:36:48.488920 | orchestrator | 16:36:48.488 STDOUT terraform:   # (config refers to values not yet known)
2025-08-29 16:36:48.488926 | orchestrator | 16:36:48.488 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-08-29 16:36:48.488931 | orchestrator | 16:36:48.488 STDOUT terraform:   + checksum = (known after apply)
2025-08-29 16:36:48.488935 | orchestrator | 16:36:48.488 STDOUT terraform:   + created_at = (known after apply)
2025-08-29 16:36:48.488939 | orchestrator | 16:36:48.488 STDOUT terraform:   + file = (known after apply)
2025-08-29 16:36:48.488944 | orchestrator | 16:36:48.488 STDOUT terraform:   + id = (known after apply)
2025-08-29 16:36:48.488948 | orchestrator | 16:36:48.488 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 16:36:48.488967 | orchestrator | 16:36:48.488 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-08-29 16:36:48.488971 | orchestrator | 16:36:48.488 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-08-29 16:36:48.488975 | orchestrator | 16:36:48.488 STDOUT terraform:   + most_recent = true
2025-08-29 16:36:48.488980 | orchestrator | 16:36:48.488 STDOUT terraform:   + name = (known after apply)
2025-08-29 16:36:48.488984 | orchestrator | 16:36:48.488 STDOUT terraform:   + protected = (known after apply)
2025-08-29 16:36:48.488989 | orchestrator | 16:36:48.488 STDOUT terraform:   + region = (known after apply)
2025-08-29 16:36:48.488996 | orchestrator | 16:36:48.488 STDOUT terraform:   + schema = (known after apply)
2025-08-29 16:36:48.489001 | orchestrator | 16:36:48.488 STDOUT terraform:   + size_bytes = (known after apply)
2025-08-29 16:36:48.489007 | orchestrator | 16:36:48.488 STDOUT terraform:   + tags = (known after apply)
2025-08-29 16:36:48.489012 | orchestrator | 16:36:48.488 STDOUT terraform:   + updated_at = (known after apply)
2025-08-29 16:36:48.489019 | orchestrator | 16:36:48.488 STDOUT terraform:   }
2025-08-29 16:36:48.489693 | orchestrator | 16:36:48.489 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-08-29 16:36:48.489705 | orchestrator | 16:36:48.489 STDOUT terraform:   # (config refers to values not yet known)
2025-08-29 16:36:48.489710 | orchestrator | 16:36:48.489 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-08-29 16:36:48.489714 | orchestrator | 16:36:48.489 STDOUT terraform:   + checksum = (known after apply)
2025-08-29 16:36:48.489718 | orchestrator | 16:36:48.489 STDOUT terraform:   + created_at = (known after apply)
2025-08-29 16:36:48.489722 | orchestrator | 16:36:48.489 STDOUT terraform:   + file = (known after apply)
2025-08-29 16:36:48.489725 | orchestrator | 16:36:48.489 STDOUT terraform:   + id = (known after apply)
2025-08-29 16:36:48.489729 | orchestrator | 16:36:48.489 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 16:36:48.489733 | orchestrator | 16:36:48.489 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-08-29 16:36:48.489736 | orchestrator | 16:36:48.489 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-08-29 16:36:48.489747 | orchestrator | 16:36:48.489 STDOUT terraform:   + most_recent = true
2025-08-29 16:36:48.489751 | orchestrator | 16:36:48.489 STDOUT terraform:   + name = (known after apply)
2025-08-29 16:36:48.489755 | orchestrator | 16:36:48.489 STDOUT terraform:   + protected = (known after apply)
2025-08-29 16:36:48.489758 | orchestrator | 16:36:48.489 STDOUT terraform:   + region = (known after apply)
2025-08-29 16:36:48.489762 | orchestrator | 16:36:48.489 STDOUT terraform:   + schema = (known after apply)
2025-08-29 16:36:48.489766 | orchestrator | 16:36:48.489 STDOUT terraform:   + size_bytes = (known after apply)
2025-08-29 16:36:48.489769 | orchestrator | 16:36:48.489 STDOUT terraform:   + tags = (known after apply)
2025-08-29 16:36:48.489773 | orchestrator | 16:36:48.489 STDOUT terraform:   + updated_at = (known after apply)
2025-08-29 16:36:48.489777 | orchestrator | 16:36:48.489 STDOUT terraform:   }
2025-08-29 16:36:48.490391 | orchestrator | 16:36:48.489 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-08-29 16:36:48.490414 | orchestrator | 16:36:48.489 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-08-29 16:36:48.490419 | orchestrator | 16:36:48.490 STDOUT terraform:   + content = (known after apply)
2025-08-29 16:36:48.490423 | orchestrator | 16:36:48.490 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-08-29 16:36:48.490427 | orchestrator | 16:36:48.490 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-08-29 16:36:48.490431 | orchestrator | 16:36:48.490 STDOUT terraform:   + content_md5 = (known after apply)
2025-08-29 16:36:48.490435 | orchestrator | 16:36:48.490 STDOUT terraform:   + content_sha1 = (known after apply)
2025-08-29 16:36:48.490439 | orchestrator | 16:36:48.490 STDOUT terraform:   + content_sha256 = (known after apply)
2025-08-29 16:36:48.490443 | orchestrator | 16:36:48.490 STDOUT terraform:   + content_sha512 = (known after apply)
2025-08-29 16:36:48.490447 | orchestrator | 16:36:48.490 STDOUT terraform:   + directory_permission = "0777"
2025-08-29 16:36:48.490450 | orchestrator | 16:36:48.490 STDOUT terraform:   + file_permission = "0644"
2025-08-29 16:36:48.490454 | orchestrator | 16:36:48.490 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-08-29 16:36:48.490458 | orchestrator | 16:36:48.490 STDOUT terraform:   + id = (known after apply)
2025-08-29 16:36:48.490462 | orchestrator | 16:36:48.490 STDOUT terraform:   }
2025-08-29 16:36:48.498071 | orchestrator | 16:36:48.490 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-08-29 16:36:48.498120 | orchestrator | 16:36:48.490 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-08-29 16:36:48.498125 | orchestrator | 16:36:48.490 STDOUT terraform:   + content = (known after apply)
2025-08-29 16:36:48.498130 | orchestrator | 16:36:48.490 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-08-29 16:36:48.498134 | orchestrator | 16:36:48.491 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-08-29 16:36:48.498138 | orchestrator | 16:36:48.491 STDOUT terraform:   + content_md5 = (known after apply)
2025-08-29 16:36:48.498141 | orchestrator | 16:36:48.491 STDOUT terraform:   + content_sha1 = (known after apply)
2025-08-29 16:36:48.498145 | orchestrator | 16:36:48.491 STDOUT terraform:   + content_sha256 = (known after apply)
2025-08-29 16:36:48.498149 | orchestrator | 16:36:48.491 STDOUT terraform:   + content_sha512 = (known after apply)
2025-08-29 16:36:48.498152 | orchestrator | 16:36:48.491 STDOUT terraform:   + directory_permission = "0777"
2025-08-29 16:36:48.498156 | orchestrator | 16:36:48.491 STDOUT terraform:   + file_permission = "0644"
2025-08-29 16:36:48.498160 | orchestrator | 16:36:48.491 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-08-29 16:36:48.498164 | orchestrator | 16:36:48.491 STDOUT terraform:   + id = (known after apply)
2025-08-29 16:36:48.498167 | orchestrator | 16:36:48.491 STDOUT terraform:   }
2025-08-29 16:36:48.498178 | orchestrator | 16:36:48.491 STDOUT terraform:   # local_file.inventory will be created
2025-08-29 16:36:48.498181 | orchestrator | 16:36:48.491 STDOUT terraform:   + resource "local_file" "inventory" {
2025-08-29 16:36:48.498185 | orchestrator | 16:36:48.491 STDOUT terraform:   + content = (known after apply)
2025-08-29 16:36:48.498203 | orchestrator | 16:36:48.491 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-08-29 16:36:48.498209 | orchestrator | 16:36:48.491 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-08-29 16:36:48.498215 | orchestrator | 16:36:48.491 STDOUT terraform:   + content_md5 = (known after apply)
2025-08-29 16:36:48.498222 | orchestrator | 16:36:48.491 STDOUT terraform:   + content_sha1 = (known after apply)
2025-08-29 16:36:48.498228 | orchestrator | 16:36:48.491 STDOUT terraform:   + content_sha256 = (known after apply)
2025-08-29 16:36:48.498235 | orchestrator | 16:36:48.491 STDOUT terraform:   + content_sha512 = (known after apply)
2025-08-29 16:36:48.498241 | orchestrator | 16:36:48.491 STDOUT terraform:   + directory_permission = "0777"
2025-08-29 16:36:48.498246 | orchestrator | 16:36:48.491 STDOUT terraform:   + file_permission = "0644"
2025-08-29 16:36:48.498254 | orchestrator | 16:36:48.491 STDOUT terraform:   + filename = "inventory.ci"
2025-08-29 16:36:48.498258 | orchestrator | 16:36:48.491 STDOUT terraform:   + id = (known after apply)
2025-08-29 16:36:48.498262 | orchestrator | 16:36:48.491 STDOUT terraform:   }
2025-08-29 16:36:48.498265 | orchestrator | 16:36:48.491 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-08-29 16:36:48.498269 | orchestrator | 16:36:48.491 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-08-29 16:36:48.498276 | orchestrator | 16:36:48.491 STDOUT terraform:   + content = (sensitive value)
2025-08-29 16:36:48.498279 | orchestrator | 16:36:48.491 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-08-29 16:36:48.498283 | orchestrator | 16:36:48.491 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-08-29 16:36:48.498287 | orchestrator | 16:36:48.491 STDOUT terraform:   + content_md5 = (known after apply)
2025-08-29 16:36:48.498290 | orchestrator | 16:36:48.491 STDOUT terraform:   + content_sha1 = (known after apply)
2025-08-29 16:36:48.498294 | orchestrator | 16:36:48.491 STDOUT terraform:   + content_sha256 = (known after apply)
2025-08-29 16:36:48.498298 | orchestrator | 16:36:48.491 STDOUT terraform:   + content_sha512 = (known after apply)
2025-08-29 16:36:48.498315 | orchestrator | 16:36:48.491 STDOUT terraform:   + directory_permission = "0700"
2025-08-29 16:36:48.498320 | orchestrator | 16:36:48.491 STDOUT terraform:   + file_permission = "0600"
2025-08-29 16:36:48.498323 | orchestrator | 16:36:48.491 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-08-29 16:36:48.498327 | orchestrator | 16:36:48.492 STDOUT terraform:   + id = (known after apply)
2025-08-29 16:36:48.498331 | orchestrator | 16:36:48.492 STDOUT terraform:   }
2025-08-29 16:36:48.498335 | orchestrator | 16:36:48.492 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-08-29 16:36:48.498338 | orchestrator | 16:36:48.492 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-08-29 16:36:48.498342 | orchestrator | 16:36:48.492 STDOUT terraform:   + id = (known after apply)
2025-08-29 16:36:48.498346 | orchestrator | 16:36:48.492 STDOUT terraform:   }
2025-08-29 16:36:48.498350 | orchestrator | 16:36:48.492 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-08-29 16:36:48.498363 | orchestrator | 16:36:48.492 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-08-29 16:36:48.498367 | orchestrator | 16:36:48.492 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 16:36:48.498370 | orchestrator | 16:36:48.492 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 16:36:48.498374 | orchestrator | 16:36:48.492 STDOUT terraform:   + id = (known after apply)
2025-08-29 16:36:48.498378 | orchestrator | 16:36:48.492 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 16:36:48.498382 | orchestrator | 16:36:48.492 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 16:36:48.498386 | orchestrator | 16:36:48.492 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-08-29 16:36:48.498389 | orchestrator | 16:36:48.492 STDOUT terraform:   + region = (known after apply)
2025-08-29 16:36:48.498393 | orchestrator | 16:36:48.492 STDOUT terraform:   + size = 80
2025-08-29 16:36:48.498397 | orchestrator | 16:36:48.492 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 16:36:48.498401 | orchestrator | 16:36:48.492 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 16:36:48.498404 | orchestrator | 16:36:48.492 STDOUT terraform:   }
2025-08-29 16:36:48.498408 | orchestrator | 16:36:48.492 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-08-29 16:36:48.498412 | orchestrator | 16:36:48.492 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 16:36:48.498415 | orchestrator | 16:36:48.492 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 16:36:48.498419 | orchestrator | 16:36:48.492 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 16:36:48.498423 | orchestrator | 16:36:48.492 STDOUT terraform:   + id = (known after apply)
2025-08-29 16:36:48.498426 | orchestrator | 16:36:48.492 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 16:36:48.498430 | orchestrator | 16:36:48.492 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 16:36:48.498434 | orchestrator | 16:36:48.492 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-08-29 16:36:48.498438 | orchestrator | 16:36:48.492 STDOUT terraform:   + region = (known after apply)
2025-08-29 16:36:48.498441 | orchestrator | 16:36:48.492 STDOUT terraform:   + size = 80
2025-08-29 16:36:48.498445 | orchestrator | 16:36:48.492 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 16:36:48.498449 | orchestrator | 16:36:48.492 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 16:36:48.498452 | orchestrator | 16:36:48.492 STDOUT terraform:   }
2025-08-29 16:36:48.498456 | orchestrator | 16:36:48.492 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-08-29 16:36:48.498460 | orchestrator | 16:36:48.492 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 16:36:48.498467 | orchestrator | 16:36:48.493 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 16:36:48.498474 | orchestrator | 16:36:48.493 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 16:36:48.498478 | orchestrator | 16:36:48.493 STDOUT terraform:   + id = (known after apply)
2025-08-29 16:36:48.498482 | orchestrator | 16:36:48.493 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 16:36:48.498486 | orchestrator | 16:36:48.493 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 16:36:48.498489 | orchestrator | 16:36:48.493 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-08-29 16:36:48.498493 | orchestrator | 16:36:48.493 STDOUT terraform:   + region = (known after apply)
2025-08-29 16:36:48.498497 | orchestrator | 16:36:48.493 STDOUT terraform:   + size = 80
2025-08-29 16:36:48.498500 | orchestrator | 16:36:48.493 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 16:36:48.498504 | orchestrator | 16:36:48.493 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 16:36:48.498508 | orchestrator | 16:36:48.493 STDOUT terraform:   }
2025-08-29 16:36:48.498511 | orchestrator | 16:36:48.493 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-08-29 16:36:48.498515 | orchestrator | 16:36:48.493 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 16:36:48.498519 | orchestrator | 16:36:48.493 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 16:36:48.498525 | orchestrator | 16:36:48.493 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 16:36:48.498529 | orchestrator | 16:36:48.493 STDOUT terraform:   + id = (known after apply)
2025-08-29 16:36:48.498532 | orchestrator | 16:36:48.493 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 16:36:48.498536 | orchestrator | 16:36:48.493 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 16:36:48.498540 | orchestrator | 16:36:48.493 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-08-29 16:36:48.498543 | orchestrator | 16:36:48.493 STDOUT terraform:   + region = (known after apply)
2025-08-29 16:36:48.498547 | orchestrator | 16:36:48.493 STDOUT terraform:   + size = 80
2025-08-29 16:36:48.498551 | orchestrator | 16:36:48.493 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 16:36:48.498554 | orchestrator | 16:36:48.493 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 16:36:48.498558 | orchestrator | 16:36:48.493 STDOUT terraform:   }
2025-08-29 16:36:48.498562 | orchestrator | 16:36:48.493 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-08-29 16:36:48.498565 | orchestrator | 16:36:48.493 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 16:36:48.498569 | orchestrator | 16:36:48.493 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 16:36:48.498573 | orchestrator | 16:36:48.493 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 16:36:48.498577 | orchestrator | 16:36:48.493 STDOUT terraform:   + id = (known after apply)
2025-08-29 16:36:48.498580 | orchestrator | 16:36:48.493 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 16:36:48.498584 | orchestrator | 16:36:48.493 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 16:36:48.498592 | orchestrator | 16:36:48.493 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-08-29 16:36:48.498595 | orchestrator | 16:36:48.494 STDOUT terraform:   + region = (known after apply)
2025-08-29 16:36:48.498599 | orchestrator | 16:36:48.494 STDOUT terraform:   + size = 80
2025-08-29 16:36:48.498603 | orchestrator | 16:36:48.494 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 16:36:48.498606 | orchestrator | 16:36:48.494 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 16:36:48.498610 | orchestrator | 16:36:48.494 STDOUT terraform:   }
2025-08-29 16:36:48.498617 | orchestrator | 16:36:48.494 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-08-29 16:36:48.498623 | orchestrator | 16:36:48.494 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 16:36:48.498627 | orchestrator | 16:36:48.494 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 16:36:48.498630 | orchestrator | 16:36:48.494 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 16:36:48.498634 | orchestrator | 16:36:48.494 STDOUT terraform:   + id = (known after apply)
2025-08-29 16:36:48.498638 | orchestrator | 16:36:48.494 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 16:36:48.498641 | orchestrator | 16:36:48.494 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 16:36:48.498645 | orchestrator | 16:36:48.494 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-08-29 16:36:48.498649 | orchestrator | 16:36:48.494 STDOUT terraform:   + region = (known after apply)
2025-08-29 16:36:48.498652 | orchestrator | 16:36:48.494 STDOUT terraform:   + size = 80
2025-08-29 16:36:48.498656 | orchestrator | 16:36:48.494 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 16:36:48.498660 | orchestrator | 16:36:48.494 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 16:36:48.498664 | orchestrator | 16:36:48.494 STDOUT terraform:   }
2025-08-29 16:36:48.498667 | orchestrator | 16:36:48.494 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-08-29 16:36:48.498671 | orchestrator | 16:36:48.494 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 16:36:48.498675 | orchestrator | 16:36:48.494 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 16:36:48.498678 | orchestrator | 16:36:48.494 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 16:36:48.498682 | orchestrator | 16:36:48.494 STDOUT terraform:   + id = (known after apply)
2025-08-29 16:36:48.498686 | orchestrator | 16:36:48.494 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 16:36:48.498689 | orchestrator | 16:36:48.494 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 16:36:48.498693 | orchestrator | 16:36:48.494 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-08-29 16:36:48.498697 | orchestrator | 16:36:48.494 STDOUT terraform:   + region = (known after apply)
2025-08-29 16:36:48.498700 | orchestrator | 16:36:48.494 STDOUT terraform:   + size = 80
2025-08-29 16:36:48.498708 | orchestrator | 16:36:48.494 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 16:36:48.498712 | orchestrator | 16:36:48.494 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 16:36:48.498715 | orchestrator | 16:36:48.494 STDOUT terraform:   }
2025-08-29 16:36:48.498719 | orchestrator | 16:36:48.494 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-08-29 16:36:48.498723 | orchestrator | 16:36:48.494 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-08-29 16:36:48.498729 | orchestrator | 16:36:48.494 STDOUT
terraform:  + attachment = (known after apply) 2025-08-29 16:36:48.498733 | orchestrator | 16:36:48.495 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 16:36:48.498737 | orchestrator | 16:36:48.495 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.498740 | orchestrator | 16:36:48.495 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 16:36:48.498744 | orchestrator | 16:36:48.495 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-08-29 16:36:48.498748 | orchestrator | 16:36:48.495 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.498751 | orchestrator | 16:36:48.495 STDOUT terraform:  + size = 20 2025-08-29 16:36:48.498755 | orchestrator | 16:36:48.495 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 16:36:48.498759 | orchestrator | 16:36:48.495 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 16:36:48.498762 | orchestrator | 16:36:48.495 STDOUT terraform:  } 2025-08-29 16:36:48.498769 | orchestrator | 16:36:48.495 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-08-29 16:36:48.498773 | orchestrator | 16:36:48.495 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 16:36:48.498777 | orchestrator | 16:36:48.495 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 16:36:48.498780 | orchestrator | 16:36:48.495 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 16:36:48.498784 | orchestrator | 16:36:48.495 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.498788 | orchestrator | 16:36:48.495 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 16:36:48.498791 | orchestrator | 16:36:48.495 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-08-29 16:36:48.498795 | orchestrator | 16:36:48.495 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.498799 | orchestrator | 16:36:48.495 STDOUT terraform:  + size = 20 2025-08-29 16:36:48.498803 | 
orchestrator | 16:36:48.495 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 16:36:48.498807 | orchestrator | 16:36:48.495 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 16:36:48.498810 | orchestrator | 16:36:48.495 STDOUT terraform:  } 2025-08-29 16:36:48.498814 | orchestrator | 16:36:48.495 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-08-29 16:36:48.498818 | orchestrator | 16:36:48.495 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 16:36:48.498821 | orchestrator | 16:36:48.495 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 16:36:48.498828 | orchestrator | 16:36:48.495 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 16:36:48.498867 | orchestrator | 16:36:48.495 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.498871 | orchestrator | 16:36:48.495 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 16:36:48.498875 | orchestrator | 16:36:48.495 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-08-29 16:36:48.498879 | orchestrator | 16:36:48.495 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.498882 | orchestrator | 16:36:48.495 STDOUT terraform:  + size = 20 2025-08-29 16:36:48.498892 | orchestrator | 16:36:48.495 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 16:36:48.498896 | orchestrator | 16:36:48.495 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 16:36:48.498900 | orchestrator | 16:36:48.495 STDOUT terraform:  } 2025-08-29 16:36:48.498903 | orchestrator | 16:36:48.495 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-08-29 16:36:48.498907 | orchestrator | 16:36:48.495 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 16:36:48.498911 | orchestrator | 16:36:48.495 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 16:36:48.498914 | orchestrator | 
16:36:48.495 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 16:36:48.498918 | orchestrator | 16:36:48.496 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.498922 | orchestrator | 16:36:48.496 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 16:36:48.498925 | orchestrator | 16:36:48.496 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-08-29 16:36:48.498929 | orchestrator | 16:36:48.496 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.498933 | orchestrator | 16:36:48.496 STDOUT terraform:  + size = 20 2025-08-29 16:36:48.498936 | orchestrator | 16:36:48.496 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 16:36:48.498940 | orchestrator | 16:36:48.496 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 16:36:48.498944 | orchestrator | 16:36:48.496 STDOUT terraform:  } 2025-08-29 16:36:48.498948 | orchestrator | 16:36:48.496 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-08-29 16:36:48.498954 | orchestrator | 16:36:48.496 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 16:36:48.498958 | orchestrator | 16:36:48.496 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 16:36:48.498962 | orchestrator | 16:36:48.496 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 16:36:48.498965 | orchestrator | 16:36:48.496 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.498969 | orchestrator | 16:36:48.496 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 16:36:48.498973 | orchestrator | 16:36:48.496 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-08-29 16:36:48.498977 | orchestrator | 16:36:48.496 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.498984 | orchestrator | 16:36:48.496 STDOUT terraform:  + size = 20 2025-08-29 16:36:48.498987 | orchestrator | 16:36:48.496 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 
16:36:48.498991 | orchestrator | 16:36:48.496 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 16:36:48.498995 | orchestrator | 16:36:48.496 STDOUT terraform:  } 2025-08-29 16:36:48.498999 | orchestrator | 16:36:48.496 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-08-29 16:36:48.499002 | orchestrator | 16:36:48.496 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 16:36:48.499006 | orchestrator | 16:36:48.496 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 16:36:48.499010 | orchestrator | 16:36:48.496 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 16:36:48.499013 | orchestrator | 16:36:48.496 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.499017 | orchestrator | 16:36:48.496 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 16:36:48.499021 | orchestrator | 16:36:48.496 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-08-29 16:36:48.499024 | orchestrator | 16:36:48.496 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.499031 | orchestrator | 16:36:48.496 STDOUT terraform:  + size = 20 2025-08-29 16:36:48.499035 | orchestrator | 16:36:48.496 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 16:36:48.499038 | orchestrator | 16:36:48.496 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 16:36:48.499042 | orchestrator | 16:36:48.496 STDOUT terraform:  } 2025-08-29 16:36:48.499046 | orchestrator | 16:36:48.496 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-08-29 16:36:48.499049 | orchestrator | 16:36:48.496 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 16:36:48.499053 | orchestrator | 16:36:48.496 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 16:36:48.499057 | orchestrator | 16:36:48.497 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 16:36:48.499061 | 
orchestrator | 16:36:48.497 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.499064 | orchestrator | 16:36:48.497 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 16:36:48.499068 | orchestrator | 16:36:48.497 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-08-29 16:36:48.499072 | orchestrator | 16:36:48.497 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.499075 | orchestrator | 16:36:48.497 STDOUT terraform:  + size = 20 2025-08-29 16:36:48.499079 | orchestrator | 16:36:48.497 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 16:36:48.499083 | orchestrator | 16:36:48.497 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 16:36:48.499086 | orchestrator | 16:36:48.497 STDOUT terraform:  } 2025-08-29 16:36:48.499090 | orchestrator | 16:36:48.497 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-08-29 16:36:48.499094 | orchestrator | 16:36:48.497 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 16:36:48.499103 | orchestrator | 16:36:48.497 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 16:36:48.499107 | orchestrator | 16:36:48.497 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 16:36:48.499111 | orchestrator | 16:36:48.497 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.499114 | orchestrator | 16:36:48.497 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 16:36:48.499118 | orchestrator | 16:36:48.497 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-08-29 16:36:48.499122 | orchestrator | 16:36:48.497 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.499125 | orchestrator | 16:36:48.497 STDOUT terraform:  + size = 20 2025-08-29 16:36:48.499129 | orchestrator | 16:36:48.497 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 16:36:48.499133 | orchestrator | 16:36:48.497 STDOUT terraform:  + volume_type = "ssd" 
2025-08-29 16:36:48.499137 | orchestrator | 16:36:48.497 STDOUT terraform:  } 2025-08-29 16:36:48.499140 | orchestrator | 16:36:48.497 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-08-29 16:36:48.499144 | orchestrator | 16:36:48.497 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 16:36:48.499148 | orchestrator | 16:36:48.497 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 16:36:48.499151 | orchestrator | 16:36:48.497 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 16:36:48.499155 | orchestrator | 16:36:48.497 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.499159 | orchestrator | 16:36:48.497 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 16:36:48.499163 | orchestrator | 16:36:48.497 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-08-29 16:36:48.499166 | orchestrator | 16:36:48.497 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.499170 | orchestrator | 16:36:48.497 STDOUT terraform:  + size = 20 2025-08-29 16:36:48.499174 | orchestrator | 16:36:48.497 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 16:36:48.499177 | orchestrator | 16:36:48.497 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 16:36:48.499181 | orchestrator | 16:36:48.497 STDOUT terraform:  } 2025-08-29 16:36:48.499187 | orchestrator | 16:36:48.497 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-08-29 16:36:48.499191 | orchestrator | 16:36:48.497 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-08-29 16:36:48.499195 | orchestrator | 16:36:48.497 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 16:36:48.499265 | orchestrator | 16:36:48.497 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 16:36:48.499321 | orchestrator | 16:36:48.499 STDOUT terraform:  + all_metadata = (known after apply) 
2025-08-29 16:36:48.499365 | orchestrator | 16:36:48.499 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 16:36:48.499397 | orchestrator | 16:36:48.499 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 16:36:48.499429 | orchestrator | 16:36:48.499 STDOUT terraform:  + config_drive = true 2025-08-29 16:36:48.499476 | orchestrator | 16:36:48.499 STDOUT terraform:  + created = (known after apply) 2025-08-29 16:36:48.499516 | orchestrator | 16:36:48.499 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 16:36:48.499554 | orchestrator | 16:36:48.499 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-08-29 16:36:48.499584 | orchestrator | 16:36:48.499 STDOUT terraform:  + force_delete = false 2025-08-29 16:36:48.499626 | orchestrator | 16:36:48.499 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 16:36:48.499694 | orchestrator | 16:36:48.499 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.499762 | orchestrator | 16:36:48.499 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 16:36:48.499817 | orchestrator | 16:36:48.499 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 16:36:48.499865 | orchestrator | 16:36:48.499 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 16:36:48.499903 | orchestrator | 16:36:48.499 STDOUT terraform:  + name = "testbed-manager" 2025-08-29 16:36:48.499935 | orchestrator | 16:36:48.499 STDOUT terraform:  + power_state = "active" 2025-08-29 16:36:48.499996 | orchestrator | 16:36:48.499 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.500074 | orchestrator | 16:36:48.500 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 16:36:48.500123 | orchestrator | 16:36:48.500 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 16:36:48.500198 | orchestrator | 16:36:48.500 STDOUT terraform:  + updated = (known after apply) 2025-08-29 16:36:48.500257 | orchestrator | 16:36:48.500 STDOUT terraform:  + 
user_data = (sensitive value) 2025-08-29 16:36:48.500299 | orchestrator | 16:36:48.500 STDOUT terraform:  + block_device { 2025-08-29 16:36:48.500353 | orchestrator | 16:36:48.500 STDOUT terraform:  + boot_index = 0 2025-08-29 16:36:48.500419 | orchestrator | 16:36:48.500 STDOUT terraform:  + delete_on_termination = false 2025-08-29 16:36:48.500478 | orchestrator | 16:36:48.500 STDOUT terraform:  + destination_type = "volume" 2025-08-29 16:36:48.500540 | orchestrator | 16:36:48.500 STDOUT terraform:  + multiattach = false 2025-08-29 16:36:48.500600 | orchestrator | 16:36:48.500 STDOUT terraform:  + source_type = "volume" 2025-08-29 16:36:48.500676 | orchestrator | 16:36:48.500 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 16:36:48.500708 | orchestrator | 16:36:48.500 STDOUT terraform:  } 2025-08-29 16:36:48.500744 | orchestrator | 16:36:48.500 STDOUT terraform:  + network { 2025-08-29 16:36:48.500790 | orchestrator | 16:36:48.500 STDOUT terraform:  + access_network = false 2025-08-29 16:36:48.500869 | orchestrator | 16:36:48.500 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 16:36:48.500933 | orchestrator | 16:36:48.500 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 16:36:48.500999 | orchestrator | 16:36:48.500 STDOUT terraform:  + mac = (known after apply) 2025-08-29 16:36:48.501085 | orchestrator | 16:36:48.501 STDOUT terraform:  + name = (known after apply) 2025-08-29 16:36:48.501136 | orchestrator | 16:36:48.501 STDOUT terraform:  + port = (known after apply) 2025-08-29 16:36:48.501175 | orchestrator | 16:36:48.501 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 16:36:48.501196 | orchestrator | 16:36:48.501 STDOUT terraform:  } 2025-08-29 16:36:48.501219 | orchestrator | 16:36:48.501 STDOUT terraform:  } 2025-08-29 16:36:48.501270 | orchestrator | 16:36:48.501 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-08-29 16:36:48.501337 | orchestrator | 16:36:48.501 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 16:36:48.501405 | orchestrator | 16:36:48.501 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 16:36:48.501462 | orchestrator | 16:36:48.501 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 16:36:48.501505 | orchestrator | 16:36:48.501 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 16:36:48.501546 | orchestrator | 16:36:48.501 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 16:36:48.501578 | orchestrator | 16:36:48.501 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 16:36:48.501607 | orchestrator | 16:36:48.501 STDOUT terraform:  + config_drive = true 2025-08-29 16:36:48.501650 | orchestrator | 16:36:48.501 STDOUT terraform:  + created = (known after apply) 2025-08-29 16:36:48.501693 | orchestrator | 16:36:48.501 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 16:36:48.501730 | orchestrator | 16:36:48.501 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 16:36:48.501760 | orchestrator | 16:36:48.501 STDOUT terraform:  + force_delete = false 2025-08-29 16:36:48.501803 | orchestrator | 16:36:48.501 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 16:36:48.501882 | orchestrator | 16:36:48.501 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.501928 | orchestrator | 16:36:48.501 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 16:36:48.501969 | orchestrator | 16:36:48.501 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 16:36:48.502003 | orchestrator | 16:36:48.501 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 16:36:48.502057 | orchestrator | 16:36:48.502 STDOUT terraform:  + name = "testbed-node-0" 2025-08-29 16:36:48.502089 | orchestrator | 16:36:48.502 STDOUT terraform:  + power_state = "active" 2025-08-29 16:36:48.502130 | orchestrator | 16:36:48.502 STDOUT terraform:  + region = (known after 
apply) 2025-08-29 16:36:48.502175 | orchestrator | 16:36:48.502 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 16:36:48.502204 | orchestrator | 16:36:48.502 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 16:36:48.502245 | orchestrator | 16:36:48.502 STDOUT terraform:  + updated = (known after apply) 2025-08-29 16:36:48.502301 | orchestrator | 16:36:48.502 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 16:36:48.502326 | orchestrator | 16:36:48.502 STDOUT terraform:  + block_device { 2025-08-29 16:36:48.502365 | orchestrator | 16:36:48.502 STDOUT terraform:  + boot_index = 0 2025-08-29 16:36:48.502399 | orchestrator | 16:36:48.502 STDOUT terraform:  + delete_on_termination = false 2025-08-29 16:36:48.502434 | orchestrator | 16:36:48.502 STDOUT terraform:  + destination_type = "volume" 2025-08-29 16:36:48.502469 | orchestrator | 16:36:48.502 STDOUT terraform:  + multiattach = false 2025-08-29 16:36:48.502504 | orchestrator | 16:36:48.502 STDOUT terraform:  + source_type = "volume" 2025-08-29 16:36:48.502548 | orchestrator | 16:36:48.502 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 16:36:48.502569 | orchestrator | 16:36:48.502 STDOUT terraform:  } 2025-08-29 16:36:48.502593 | orchestrator | 16:36:48.502 STDOUT terraform:  + network { 2025-08-29 16:36:48.502622 | orchestrator | 16:36:48.502 STDOUT terraform:  + access_network = false 2025-08-29 16:36:48.502659 | orchestrator | 16:36:48.502 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 16:36:48.502696 | orchestrator | 16:36:48.502 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 16:36:48.502733 | orchestrator | 16:36:48.502 STDOUT terraform:  + mac = (known after apply) 2025-08-29 16:36:48.502771 | orchestrator | 16:36:48.502 STDOUT terraform:  + name = (known after apply) 2025-08-29 16:36:48.502808 | orchestrator | 16:36:48.502 STDOUT terraform:  + port = (known after apply) 2025-08-29 
16:36:48.502856 | orchestrator | 16:36:48.502 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 16:36:48.502879 | orchestrator | 16:36:48.502 STDOUT terraform:  } 2025-08-29 16:36:48.502899 | orchestrator | 16:36:48.502 STDOUT terraform:  } 2025-08-29 16:36:48.502948 | orchestrator | 16:36:48.502 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-08-29 16:36:48.502996 | orchestrator | 16:36:48.502 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 16:36:48.503038 | orchestrator | 16:36:48.503 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 16:36:48.503078 | orchestrator | 16:36:48.503 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 16:36:48.503118 | orchestrator | 16:36:48.503 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 16:36:48.503158 | orchestrator | 16:36:48.503 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 16:36:48.503187 | orchestrator | 16:36:48.503 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 16:36:48.503216 | orchestrator | 16:36:48.503 STDOUT terraform:  + config_drive = true 2025-08-29 16:36:48.503256 | orchestrator | 16:36:48.503 STDOUT terraform:  + created = (known after apply) 2025-08-29 16:36:48.503296 | orchestrator | 16:36:48.503 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 16:36:48.503332 | orchestrator | 16:36:48.503 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 16:36:48.503361 | orchestrator | 16:36:48.503 STDOUT terraform:  + force_delete = false 2025-08-29 16:36:48.503407 | orchestrator | 16:36:48.503 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 16:36:48.503455 | orchestrator | 16:36:48.503 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.503495 | orchestrator | 16:36:48.503 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 16:36:48.503536 | orchestrator | 16:36:48.503 STDOUT 
terraform:  + image_name = (known after apply) 2025-08-29 16:36:48.503566 | orchestrator | 16:36:48.503 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 16:36:48.503602 | orchestrator | 16:36:48.503 STDOUT terraform:  + name = "testbed-node-1" 2025-08-29 16:36:48.503632 | orchestrator | 16:36:48.503 STDOUT terraform:  + power_state = "active" 2025-08-29 16:36:48.503673 | orchestrator | 16:36:48.503 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.503713 | orchestrator | 16:36:48.503 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 16:36:48.503742 | orchestrator | 16:36:48.503 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 16:36:48.503782 | orchestrator | 16:36:48.503 STDOUT terraform:  + updated = (known after apply) 2025-08-29 16:36:48.503855 | orchestrator | 16:36:48.503 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 16:36:48.503895 | orchestrator | 16:36:48.503 STDOUT terraform:  + block_device { 2025-08-29 16:36:48.503927 | orchestrator | 16:36:48.503 STDOUT terraform:  + boot_index = 0 2025-08-29 16:36:48.503967 | orchestrator | 16:36:48.503 STDOUT terraform:  + delete_on_termination = false 2025-08-29 16:36:48.504002 | orchestrator | 16:36:48.503 STDOUT terraform:  + destination_type = "volume" 2025-08-29 16:36:48.504036 | orchestrator | 16:36:48.504 STDOUT terraform:  + multiattach = false 2025-08-29 16:36:48.504071 | orchestrator | 16:36:48.504 STDOUT terraform:  + source_type = "volume" 2025-08-29 16:36:48.504119 | orchestrator | 16:36:48.504 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 16:36:48.504140 | orchestrator | 16:36:48.504 STDOUT terraform:  } 2025-08-29 16:36:48.504161 | orchestrator | 16:36:48.504 STDOUT terraform:  + network { 2025-08-29 16:36:48.504188 | orchestrator | 16:36:48.504 STDOUT terraform:  + access_network = false 2025-08-29 16:36:48.504224 | orchestrator | 16:36:48.504 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-08-29 16:36:48.504260 | orchestrator | 16:36:48.504 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 16:36:48.504298 | orchestrator | 16:36:48.504 STDOUT terraform:  + mac = (known after apply) 2025-08-29 16:36:48.504335 | orchestrator | 16:36:48.504 STDOUT terraform:  + name = (known after apply) 2025-08-29 16:36:48.504372 | orchestrator | 16:36:48.504 STDOUT terraform:  + port = (known after apply) 2025-08-29 16:36:48.504409 | orchestrator | 16:36:48.504 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 16:36:48.504429 | orchestrator | 16:36:48.504 STDOUT terraform:  } 2025-08-29 16:36:48.504449 | orchestrator | 16:36:48.504 STDOUT terraform:  } 2025-08-29 16:36:48.504497 | orchestrator | 16:36:48.504 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-08-29 16:36:48.504543 | orchestrator | 16:36:48.504 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 16:36:48.504588 | orchestrator | 16:36:48.504 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 16:36:48.504628 | orchestrator | 16:36:48.504 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 16:36:48.504668 | orchestrator | 16:36:48.504 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 16:36:48.504711 | orchestrator | 16:36:48.504 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 16:36:48.504741 | orchestrator | 16:36:48.504 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 16:36:48.504767 | orchestrator | 16:36:48.504 STDOUT terraform:  + config_drive = true 2025-08-29 16:36:48.504807 | orchestrator | 16:36:48.504 STDOUT terraform:  + created = (known after apply) 2025-08-29 16:36:48.504879 | orchestrator | 16:36:48.504 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 16:36:48.504916 | orchestrator | 16:36:48.504 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 16:36:48.504946 | orchestrator | 16:36:48.504 
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
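The six `node_server` instances and nine `node_volume_attachment` resources planned above would typically come from counted Terraform resources. The following is a minimal, hypothetical sketch reconstructed from the plan output — the variable names, counts, and volume references are assumptions, not the actual osism/testbed sources:

```hcl
# Hypothetical sketch; names of volume resources and count variables are assumed.
resource "openstack_compute_instance_v2" "node_server" {
  count             = 6                          # testbed-node-0 .. testbed-node-5
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.yml")      # appears as a content hash in the plan

  # Boot from a pre-created volume, kept on instance deletion.
  block_device {
    boot_index            = 0
    source_type           = "volume"
    destination_type      = "volume"
    delete_on_termination = false
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id
  }

  # Attach via a pre-created management port.
  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}

# Additional data volumes attached after boot; the plan shows nine attachments.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id  # mapping assumed
  volume_id   = openstack_blockstorage_volume_v3.data_volume[count.index].id
}
```

Every attribute printed as `(known after apply)` in the plan is one Terraform cannot compute until the provider creates the resource, which is why only literal arguments (flavor, key pair, fixed values) appear resolved above.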
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 16:36:48.524590 | orchestrator | 16:36:48.524 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 16:36:48.524630 | orchestrator | 16:36:48.524 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 16:36:48.524671 | orchestrator | 16:36:48.524 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 16:36:48.524713 | orchestrator | 16:36:48.524 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 16:36:48.524754 | orchestrator | 16:36:48.524 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 16:36:48.524795 | orchestrator | 16:36:48.524 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 16:36:48.524849 | orchestrator | 16:36:48.524 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.524891 | orchestrator | 16:36:48.524 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 16:36:48.524932 | orchestrator | 16:36:48.524 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 16:36:48.524972 | orchestrator | 16:36:48.524 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 16:36:48.525015 | orchestrator | 16:36:48.524 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 16:36:48.525056 | orchestrator | 16:36:48.525 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.525096 | orchestrator | 16:36:48.525 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 16:36:48.525137 | orchestrator | 16:36:48.525 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 16:36:48.525162 | orchestrator | 16:36:48.525 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 16:36:48.525196 | orchestrator | 16:36:48.525 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 16:36:48.525216 | orchestrator | 16:36:48.525 STDOUT terraform:  } 2025-08-29 16:36:48.525241 | orchestrator | 16:36:48.525 STDOUT terraform:  
+ allowed_address_pairs { 2025-08-29 16:36:48.525275 | orchestrator | 16:36:48.525 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 16:36:48.525295 | orchestrator | 16:36:48.525 STDOUT terraform:  } 2025-08-29 16:36:48.525322 | orchestrator | 16:36:48.525 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 16:36:48.525355 | orchestrator | 16:36:48.525 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 16:36:48.525380 | orchestrator | 16:36:48.525 STDOUT terraform:  } 2025-08-29 16:36:48.525405 | orchestrator | 16:36:48.525 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 16:36:48.525438 | orchestrator | 16:36:48.525 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 16:36:48.525458 | orchestrator | 16:36:48.525 STDOUT terraform:  } 2025-08-29 16:36:48.525486 | orchestrator | 16:36:48.525 STDOUT terraform:  + binding (known after apply) 2025-08-29 16:36:48.525506 | orchestrator | 16:36:48.525 STDOUT terraform:  + fixed_ip { 2025-08-29 16:36:48.525537 | orchestrator | 16:36:48.525 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-08-29 16:36:48.525571 | orchestrator | 16:36:48.525 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 16:36:48.525590 | orchestrator | 16:36:48.525 STDOUT terraform:  } 2025-08-29 16:36:48.525611 | orchestrator | 16:36:48.525 STDOUT terraform:  } 2025-08-29 16:36:48.525664 | orchestrator | 16:36:48.525 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-08-29 16:36:48.525714 | orchestrator | 16:36:48.525 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 16:36:48.525759 | orchestrator | 16:36:48.525 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 16:36:48.525800 | orchestrator | 16:36:48.525 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 16:36:48.530134 | orchestrator | 16:36:48.525 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-08-29 16:36:48.530218 | orchestrator | 16:36:48.525 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 16:36:48.530234 | orchestrator | 16:36:48.525 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 16:36:48.530246 | orchestrator | 16:36:48.525 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 16:36:48.530271 | orchestrator | 16:36:48.525 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 16:36:48.530283 | orchestrator | 16:36:48.526 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 16:36:48.530298 | orchestrator | 16:36:48.526 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.530309 | orchestrator | 16:36:48.526 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 16:36:48.530320 | orchestrator | 16:36:48.526 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 16:36:48.530330 | orchestrator | 16:36:48.526 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 16:36:48.530340 | orchestrator | 16:36:48.526 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 16:36:48.530351 | orchestrator | 16:36:48.526 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.530362 | orchestrator | 16:36:48.526 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 16:36:48.530372 | orchestrator | 16:36:48.526 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 16:36:48.530384 | orchestrator | 16:36:48.526 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 16:36:48.530395 | orchestrator | 16:36:48.526 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 16:36:48.530426 | orchestrator | 16:36:48.526 STDOUT terraform:  } 2025-08-29 16:36:48.530438 | orchestrator | 16:36:48.526 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 16:36:48.530449 | orchestrator | 16:36:48.526 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 16:36:48.530461 | 
orchestrator | 16:36:48.526 STDOUT terraform:  } 2025-08-29 16:36:48.530471 | orchestrator | 16:36:48.526 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 16:36:48.530481 | orchestrator | 16:36:48.526 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 16:36:48.530492 | orchestrator | 16:36:48.526 STDOUT terraform:  } 2025-08-29 16:36:48.530503 | orchestrator | 16:36:48.526 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 16:36:48.530514 | orchestrator | 16:36:48.526 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 16:36:48.530525 | orchestrator | 16:36:48.526 STDOUT terraform:  } 2025-08-29 16:36:48.530535 | orchestrator | 16:36:48.526 STDOUT terraform:  + binding (known after apply) 2025-08-29 16:36:48.530545 | orchestrator | 16:36:48.526 STDOUT terraform:  + fixed_ip { 2025-08-29 16:36:48.530555 | orchestrator | 16:36:48.526 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-08-29 16:36:48.530566 | orchestrator | 16:36:48.526 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 16:36:48.530578 | orchestrator | 16:36:48.526 STDOUT terraform:  } 2025-08-29 16:36:48.530589 | orchestrator | 16:36:48.526 STDOUT terraform:  } 2025-08-29 16:36:48.530599 | orchestrator | 16:36:48.526 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-08-29 16:36:48.530613 | orchestrator | 16:36:48.526 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 16:36:48.530626 | orchestrator | 16:36:48.526 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 16:36:48.530637 | orchestrator | 16:36:48.526 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 16:36:48.530649 | orchestrator | 16:36:48.526 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 16:36:48.530660 | orchestrator | 16:36:48.526 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 16:36:48.530691 | orchestrator | 
16:36:48.526 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 16:36:48.530703 | orchestrator | 16:36:48.526 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 16:36:48.530714 | orchestrator | 16:36:48.526 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 16:36:48.530725 | orchestrator | 16:36:48.526 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 16:36:48.530740 | orchestrator | 16:36:48.526 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.530751 | orchestrator | 16:36:48.526 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 16:36:48.530762 | orchestrator | 16:36:48.527 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 16:36:48.530779 | orchestrator | 16:36:48.527 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 16:36:48.530802 | orchestrator | 16:36:48.527 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 16:36:48.530813 | orchestrator | 16:36:48.527 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.530824 | orchestrator | 16:36:48.527 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 16:36:48.530905 | orchestrator | 16:36:48.527 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 16:36:48.530918 | orchestrator | 16:36:48.527 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 16:36:48.530929 | orchestrator | 16:36:48.527 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 16:36:48.530941 | orchestrator | 16:36:48.527 STDOUT terraform:  } 2025-08-29 16:36:48.530952 | orchestrator | 16:36:48.527 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 16:36:48.530962 | orchestrator | 16:36:48.527 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 16:36:48.530972 | orchestrator | 16:36:48.527 STDOUT terraform:  } 2025-08-29 16:36:48.530983 | orchestrator | 16:36:48.527 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 
16:36:48.530994 | orchestrator | 16:36:48.527 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 16:36:48.531005 | orchestrator | 16:36:48.527 STDOUT terraform:  } 2025-08-29 16:36:48.531015 | orchestrator | 16:36:48.527 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 16:36:48.531027 | orchestrator | 16:36:48.527 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 16:36:48.531037 | orchestrator | 16:36:48.527 STDOUT terraform:  } 2025-08-29 16:36:48.531048 | orchestrator | 16:36:48.527 STDOUT terraform:  + binding (known after apply) 2025-08-29 16:36:48.531060 | orchestrator | 16:36:48.527 STDOUT terraform:  + fixed_ip { 2025-08-29 16:36:48.531071 | orchestrator | 16:36:48.527 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-08-29 16:36:48.531082 | orchestrator | 16:36:48.527 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 16:36:48.531093 | orchestrator | 16:36:48.527 STDOUT terraform:  } 2025-08-29 16:36:48.531100 | orchestrator | 16:36:48.527 STDOUT terraform:  } 2025-08-29 16:36:48.531107 | orchestrator | 16:36:48.527 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-08-29 16:36:48.531114 | orchestrator | 16:36:48.527 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 16:36:48.531122 | orchestrator | 16:36:48.527 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 16:36:48.531128 | orchestrator | 16:36:48.527 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 16:36:48.531136 | orchestrator | 16:36:48.527 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 16:36:48.531142 | orchestrator | 16:36:48.527 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 16:36:48.531149 | orchestrator | 16:36:48.527 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 16:36:48.531155 | orchestrator | 16:36:48.527 STDOUT terraform:  + device_owner = (known after 
apply) 2025-08-29 16:36:48.531162 | orchestrator | 16:36:48.527 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 16:36:48.531188 | orchestrator | 16:36:48.527 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 16:36:48.531195 | orchestrator | 16:36:48.527 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.531202 | orchestrator | 16:36:48.527 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 16:36:48.531208 | orchestrator | 16:36:48.527 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 16:36:48.531215 | orchestrator | 16:36:48.527 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 16:36:48.531222 | orchestrator | 16:36:48.527 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 16:36:48.531229 | orchestrator | 16:36:48.527 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.531235 | orchestrator | 16:36:48.527 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 16:36:48.531242 | orchestrator | 16:36:48.528 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 16:36:48.531249 | orchestrator | 16:36:48.528 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 16:36:48.531261 | orchestrator | 16:36:48.528 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 16:36:48.531271 | orchestrator | 16:36:48.528 STDOUT terraform:  } 2025-08-29 16:36:48.531282 | orchestrator | 16:36:48.528 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 16:36:48.531293 | orchestrator | 16:36:48.528 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 16:36:48.531304 | orchestrator | 16:36:48.528 STDOUT terraform:  } 2025-08-29 16:36:48.531316 | orchestrator | 16:36:48.528 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 16:36:48.531327 | orchestrator | 16:36:48.528 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 16:36:48.531338 | orchestrator | 16:36:48.528 STDOUT terraform:  } 
2025-08-29 16:36:48.531349 | orchestrator | 16:36:48.528 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 16:36:48.531360 | orchestrator | 16:36:48.528 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 16:36:48.531367 | orchestrator | 16:36:48.528 STDOUT terraform:  } 2025-08-29 16:36:48.531374 | orchestrator | 16:36:48.528 STDOUT terraform:  + binding (known after apply) 2025-08-29 16:36:48.531380 | orchestrator | 16:36:48.528 STDOUT terraform:  + fixed_ip { 2025-08-29 16:36:48.531387 | orchestrator | 16:36:48.528 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-08-29 16:36:48.531394 | orchestrator | 16:36:48.528 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 16:36:48.531400 | orchestrator | 16:36:48.528 STDOUT terraform:  } 2025-08-29 16:36:48.531407 | orchestrator | 16:36:48.528 STDOUT terraform:  } 2025-08-29 16:36:48.531414 | orchestrator | 16:36:48.528 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-08-29 16:36:48.531421 | orchestrator | 16:36:48.528 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-08-29 16:36:48.531427 | orchestrator | 16:36:48.528 STDOUT terraform:  + force_destroy = false 2025-08-29 16:36:48.531434 | orchestrator | 16:36:48.528 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.531447 | orchestrator | 16:36:48.528 STDOUT terraform:  + port_id = (known after apply) 2025-08-29 16:36:48.531454 | orchestrator | 16:36:48.528 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.531460 | orchestrator | 16:36:48.528 STDOUT terraform:  + router_id = (known after apply) 2025-08-29 16:36:48.531467 | orchestrator | 16:36:48.528 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 16:36:48.531473 | orchestrator | 16:36:48.528 STDOUT terraform:  } 2025-08-29 16:36:48.531480 | orchestrator | 16:36:48.528 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-08-29 16:36:48.531487 | orchestrator | 16:36:48.528 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-08-29 16:36:48.531494 | orchestrator | 16:36:48.528 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 16:36:48.531507 | orchestrator | 16:36:48.528 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 16:36:48.531514 | orchestrator | 16:36:48.528 STDOUT terraform:  + availability_zone_hints = [ 2025-08-29 16:36:48.531524 | orchestrator | 16:36:48.528 STDOUT terraform:  + "nova", 2025-08-29 16:36:48.531531 | orchestrator | 16:36:48.528 STDOUT terraform:  ] 2025-08-29 16:36:48.531538 | orchestrator | 16:36:48.528 STDOUT terraform:  + distributed = (known after apply) 2025-08-29 16:36:48.531544 | orchestrator | 16:36:48.528 STDOUT terraform:  + enable_snat = (known after apply) 2025-08-29 16:36:48.531551 | orchestrator | 16:36:48.528 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-08-29 16:36:48.531567 | orchestrator | 16:36:48.528 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-08-29 16:36:48.531575 | orchestrator | 16:36:48.528 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.531581 | orchestrator | 16:36:48.528 STDOUT terraform:  + name = "testbed" 2025-08-29 16:36:48.531588 | orchestrator | 16:36:48.528 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.531594 | orchestrator | 16:36:48.528 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 16:36:48.531601 | orchestrator | 16:36:48.528 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-08-29 16:36:48.531607 | orchestrator | 16:36:48.528 STDOUT terraform:  } 2025-08-29 16:36:48.531614 | orchestrator | 16:36:48.528 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-08-29 16:36:48.531621 | orchestrator | 16:36:48.529 STDOUT terraform:  + resource 
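The count-indexed `node_port_management` ports above, each carrying the same four allowed address pairs and a consecutive fixed IP, would follow from a definition along these lines. This is a hypothetical sketch, not the testbed repository's actual code; the network/subnet resource names and the hard-coded count are assumptions:

```hcl
# Sketch: six management ports with identical allowed_address_pairs and
# consecutive fixed IPs 192.168.16.10 .. 192.168.16.15.
# net_management / subnet_management names are assumed, not from the plan.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = cidrhost("192.168.16.0/20", 10 + count.index) # .10 .. .15
  }

  # The same four pairs on every port permit the internal prefix and the
  # shared/virtual addresses (.254, .8, .9) to be used behind these ports.
  dynamic "allowed_address_pairs" {
    for_each = [
      "192.168.112.0/20",
      "192.168.16.254/20",
      "192.168.16.8/20",
      "192.168.16.9/20",
    ]
    content {
      ip_address = allowed_address_pairs.value
    }
  }
}
```

Without the allowed address pairs, Neutron's port security would drop traffic sourced from any address other than the port's own fixed IP.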
  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
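The management-group rules planned above (ssh, wireguard, and intra-subnet tcp/udp) would come from definitions roughly like the following. A sketch only: the rules' wiring to the security group is an assumption, since the plan shows `security_group_id` as known-after-apply:

```hcl
# Sketch of two of the management rules above; the group reference is
# assumed, not taken from the plan output.
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  # No port range: all TCP ports, but only from the internal subnet.
  remote_ip_prefix  = "192.168.16.0/20"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```

Omitting `port_range_min`/`port_range_max` opens the full port range for the given protocol, which is why rules 3 and 4 restrict `remote_ip_prefix` to the internal 192.168.16.0/20 subnet instead of 0.0.0.0/0.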
orchestrator | 16:36:48.531 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-08-29 16:36:48.532339 | orchestrator | 16:36:48.531 STDOUT terraform:  + direction = "ingress" 2025-08-29 16:36:48.532346 | orchestrator | 16:36:48.531 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 16:36:48.532352 | orchestrator | 16:36:48.531 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.532359 | orchestrator | 16:36:48.532 STDOUT terraform:  + protocol = "tcp" 2025-08-29 16:36:48.532365 | orchestrator | 16:36:48.532 STDOUT terraform:  + region = (known after apply) 2025-08-29 16:36:48.532372 | orchestrator | 16:36:48.532 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 16:36:48.532379 | orchestrator | 16:36:48.532 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 16:36:48.532385 | orchestrator | 16:36:48.532 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 16:36:48.532392 | orchestrator | 16:36:48.532 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 16:36:48.532402 | orchestrator | 16:36:48.532 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 16:36:48.532409 | orchestrator | 16:36:48.532 STDOUT terraform:  } 2025-08-29 16:36:48.532416 | orchestrator | 16:36:48.532 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-08-29 16:36:48.532422 | orchestrator | 16:36:48.532 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-08-29 16:36:48.532429 | orchestrator | 16:36:48.532 STDOUT terraform:  + direction = "ingress" 2025-08-29 16:36:48.532435 | orchestrator | 16:36:48.532 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 16:36:48.532445 | orchestrator | 16:36:48.532 STDOUT terraform:  + id = (known after apply) 2025-08-29 16:36:48.532452 | orchestrator | 16:36:48.532 STDOUT terraform:  + protocol = "udp" 
2025-08-29 16:36:48.532501 | orchestrator | 16:36:48.532 STDOUT terraform:  + region = (known after apply)
2025-08-29 16:36:48.532512 | orchestrator | 16:36:48.532 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-08-29 16:36:48.532542 | orchestrator | 16:36:48.532 STDOUT terraform:  + remote_group_id = (known after apply)
2025-08-29 16:36:48.532578 | orchestrator | 16:36:48.532 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-08-29 16:36:48.532607 | orchestrator | 16:36:48.532 STDOUT terraform:  + security_group_id = (known after apply)
2025-08-29 16:36:48.532642 | orchestrator | 16:36:48.532 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 16:36:48.532652 | orchestrator | 16:36:48.532 STDOUT terraform:  }
2025-08-29 16:36:48.532698 | orchestrator | 16:36:48.532 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-08-29 16:36:48.532898 | orchestrator | 16:36:48.532 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-08-29 16:36:48.532949 | orchestrator | 16:36:48.532 STDOUT terraform:  + direction = "ingress"
2025-08-29 16:36:48.532969 | orchestrator | 16:36:48.532 STDOUT terraform:  + ethertype = "IPv4"
2025-08-29 16:36:48.532974 | orchestrator | 16:36:48.532 STDOUT terraform:  + id = (known after apply)
2025-08-29 16:36:48.532978 | orchestrator | 16:36:48.532 STDOUT terraform:  + protocol = "icmp"
2025-08-29 16:36:48.532988 | orchestrator | 16:36:48.532 STDOUT terraform:  + region = (known after apply)
2025-08-29 16:36:48.532992 | orchestrator | 16:36:48.532 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-08-29 16:36:48.532996 | orchestrator | 16:36:48.532 STDOUT terraform:  + remote_group_id = (known after apply)
2025-08-29 16:36:48.532999 | orchestrator | 16:36:48.532 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-08-29 16:36:48.533005 | orchestrator | 16:36:48.532 STDOUT terraform:  + security_group_id = (known after apply)
2025-08-29 16:36:48.533041 | orchestrator | 16:36:48.533 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 16:36:48.533048 | orchestrator | 16:36:48.533 STDOUT terraform:  }
2025-08-29 16:36:48.533102 | orchestrator | 16:36:48.533 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-08-29 16:36:48.533149 | orchestrator | 16:36:48.533 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-08-29 16:36:48.533176 | orchestrator | 16:36:48.533 STDOUT terraform:  + description = "vrrp"
2025-08-29 16:36:48.533204 | orchestrator | 16:36:48.533 STDOUT terraform:  + direction = "ingress"
2025-08-29 16:36:48.533247 | orchestrator | 16:36:48.533 STDOUT terraform:  + ethertype = "IPv4"
2025-08-29 16:36:48.533290 | orchestrator | 16:36:48.533 STDOUT terraform:  + id = (known after apply)
2025-08-29 16:36:48.533297 | orchestrator | 16:36:48.533 STDOUT terraform:  + protocol = "112"
2025-08-29 16:36:48.533308 | orchestrator | 16:36:48.533 STDOUT terraform:  + region = (known after apply)
2025-08-29 16:36:48.533343 | orchestrator | 16:36:48.533 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-08-29 16:36:48.533377 | orchestrator | 16:36:48.533 STDOUT terraform:  + remote_group_id = (known after apply)
2025-08-29 16:36:48.533404 | orchestrator | 16:36:48.533 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-08-29 16:36:48.533439 | orchestrator | 16:36:48.533 STDOUT terraform:  + security_group_id = (known after apply)
2025-08-29 16:36:48.533476 | orchestrator | 16:36:48.533 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 16:36:48.533482 | orchestrator | 16:36:48.533 STDOUT terraform:  }
2025-08-29 16:36:48.533532 | orchestrator | 16:36:48.533 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-08-29 16:36:48.533580 | orchestrator | 16:36:48.533 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-08-29 16:36:48.533617 | orchestrator | 16:36:48.533 STDOUT terraform:  + all_tags = (known after apply)
2025-08-29 16:36:48.533644 | orchestrator | 16:36:48.533 STDOUT terraform:  + description = "management security group"
2025-08-29 16:36:48.533704 | orchestrator | 16:36:48.533 STDOUT terraform:  + id = (known after apply)
2025-08-29 16:36:48.533732 | orchestrator | 16:36:48.533 STDOUT terraform:  + name = "testbed-management"
2025-08-29 16:36:48.533761 | orchestrator | 16:36:48.533 STDOUT terraform:  + region = (known after apply)
2025-08-29 16:36:48.533788 | orchestrator | 16:36:48.533 STDOUT terraform:  + stateful = (known after apply)
2025-08-29 16:36:48.533815 | orchestrator | 16:36:48.533 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 16:36:48.533823 | orchestrator | 16:36:48.533 STDOUT terraform:  }
2025-08-29 16:36:48.533880 | orchestrator | 16:36:48.533 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-08-29 16:36:48.533925 | orchestrator | 16:36:48.533 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-08-29 16:36:48.533955 | orchestrator | 16:36:48.533 STDOUT terraform:  + all_tags = (known after apply)
2025-08-29 16:36:48.533978 | orchestrator | 16:36:48.533 STDOUT terraform:  + description = "node security group"
2025-08-29 16:36:48.534005 | orchestrator | 16:36:48.533 STDOUT terraform:  + id = (known after apply)
2025-08-29 16:36:48.534067 | orchestrator | 16:36:48.534 STDOUT terraform:  + name = "testbed-node"
2025-08-29 16:36:48.534074 | orchestrator | 16:36:48.534 STDOUT terraform:  + region = (known after apply)
2025-08-29 16:36:48.534092 | orchestrator | 16:36:48.534 STDOUT terraform:  + stateful = (known after apply)
2025-08-29 16:36:48.534121 | orchestrator | 16:36:48.534 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 16:36:48.534127 | orchestrator | 16:36:48.534 STDOUT terraform:  }
2025-08-29 16:36:48.534176 | orchestrator | 16:36:48.534 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-08-29 16:36:48.534219 | orchestrator | 16:36:48.534 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-08-29 16:36:48.534249 | orchestrator | 16:36:48.534 STDOUT terraform:  + all_tags = (known after apply)
2025-08-29 16:36:48.534278 | orchestrator | 16:36:48.534 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-08-29 16:36:48.534299 | orchestrator | 16:36:48.534 STDOUT terraform:  + dns_nameservers = [
2025-08-29 16:36:48.534325 | orchestrator | 16:36:48.534 STDOUT terraform:  + "8.8.8.8",
2025-08-29 16:36:48.534330 | orchestrator | 16:36:48.534 STDOUT terraform:  + "9.9.9.9",
2025-08-29 16:36:48.534342 | orchestrator | 16:36:48.534 STDOUT terraform:  ]
2025-08-29 16:36:48.534361 | orchestrator | 16:36:48.534 STDOUT terraform:  + enable_dhcp = true
2025-08-29 16:36:48.534387 | orchestrator | 16:36:48.534 STDOUT terraform:  + gateway_ip = (known after apply)
2025-08-29 16:36:48.534418 | orchestrator | 16:36:48.534 STDOUT terraform:  + id = (known after apply)
2025-08-29 16:36:48.534442 | orchestrator | 16:36:48.534 STDOUT terraform:  + ip_version = 4
2025-08-29 16:36:48.534471 | orchestrator | 16:36:48.534 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-08-29 16:36:48.534500 | orchestrator | 16:36:48.534 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-08-29 16:36:48.534535 | orchestrator | 16:36:48.534 STDOUT terraform:  + name = "subnet-testbed-management"
2025-08-29 16:36:48.534566 | orchestrator | 16:36:48.534 STDOUT terraform:  + network_id = (known after apply)
2025-08-29 16:36:48.534584 | orchestrator | 16:36:48.534 STDOUT terraform:  + no_gateway = false
2025-08-29 16:36:48.534614 | orchestrator | 16:36:48.534 STDOUT terraform:  + region = (known after apply)
2025-08-29 16:36:48.534643 | orchestrator | 16:36:48.534 STDOUT terraform:  + service_types = (known after apply)
2025-08-29 16:36:48.534671 | orchestrator | 16:36:48.534 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 16:36:48.534689 | orchestrator | 16:36:48.534 STDOUT terraform:  + allocation_pool {
2025-08-29 16:36:48.534712 | orchestrator | 16:36:48.534 STDOUT terraform:  + end = "192.168.31.250"
2025-08-29 16:36:48.534735 | orchestrator | 16:36:48.534 STDOUT terraform:  + start = "192.168.31.200"
2025-08-29 16:36:48.534748 | orchestrator | 16:36:48.534 STDOUT terraform:  }
2025-08-29 16:36:48.534761 | orchestrator | 16:36:48.534 STDOUT terraform:  }
2025-08-29 16:36:48.534785 | orchestrator | 16:36:48.534 STDOUT terraform:  # terraform_data.image will be created
2025-08-29 16:36:48.534809 | orchestrator | 16:36:48.534 STDOUT terraform:  + resource "terraform_data" "image" {
2025-08-29 16:36:48.534853 | orchestrator | 16:36:48.534 STDOUT terraform:  + id = (known after apply)
2025-08-29 16:36:48.534860 | orchestrator | 16:36:48.534 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-08-29 16:36:48.534881 | orchestrator | 16:36:48.534 STDOUT terraform:  + output = (known after apply)
2025-08-29 16:36:48.534888 | orchestrator | 16:36:48.534 STDOUT terraform:  }
2025-08-29 16:36:48.534918 | orchestrator | 16:36:48.534 STDOUT terraform:  # terraform_data.image_node will be created
2025-08-29 16:36:48.534949 | orchestrator | 16:36:48.534 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-08-29 16:36:48.534967 | orchestrator | 16:36:48.534 STDOUT terraform:  + id = (known after apply)
2025-08-29 16:36:48.534987 | orchestrator | 16:36:48.534 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-08-29 16:36:48.535011 | orchestrator | 16:36:48.534 STDOUT terraform:  + output = (known after apply)
2025-08-29 16:36:48.535018 | orchestrator | 16:36:48.535 STDOUT terraform:  }
2025-08-29 16:36:48.535048 | orchestrator | 16:36:48.535 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-08-29 16:36:48.535055 | orchestrator | 16:36:48.535 STDOUT terraform: Changes to Outputs:
2025-08-29 16:36:48.535081 | orchestrator | 16:36:48.535 STDOUT terraform:  + manager_address = (sensitive value)
2025-08-29 16:36:48.535104 | orchestrator | 16:36:48.535 STDOUT terraform:  + private_key = (sensitive value)
2025-08-29 16:36:48.649574 | orchestrator | 16:36:48.649 STDOUT terraform: terraform_data.image_node: Creating...
2025-08-29 16:36:48.649661 | orchestrator | 16:36:48.649 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=85aeb5f0-93f9-4a66-7ce6-cf31a4a6d9aa]
2025-08-29 16:36:48.711687 | orchestrator | 16:36:48.711 STDOUT terraform: terraform_data.image: Creating...
2025-08-29 16:36:48.711801 | orchestrator | 16:36:48.711 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=03c91c2d-936f-b5cc-43a2-2ac6721f6bd5]
2025-08-29 16:36:48.734526 | orchestrator | 16:36:48.734 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-08-29 16:36:48.736131 | orchestrator | 16:36:48.735 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-08-29 16:36:48.740090 | orchestrator | 16:36:48.739 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-08-29 16:36:48.745563 | orchestrator | 16:36:48.744 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-08-29 16:36:48.745609 | orchestrator | 16:36:48.744 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-08-29 16:36:48.745616 | orchestrator | 16:36:48.744 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-08-29 16:36:48.746151 | orchestrator | 16:36:48.746 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-08-29 16:36:48.747098 | orchestrator | 16:36:48.747 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-08-29 16:36:48.749017 | orchestrator | 16:36:48.748 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-08-29 16:36:48.749518 | orchestrator | 16:36:48.749 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-08-29 16:36:49.180786 | orchestrator | 16:36:49.178 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-08-29 16:36:49.184728 | orchestrator | 16:36:49.184 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-08-29 16:36:49.187978 | orchestrator | 16:36:49.187 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-08-29 16:36:49.197545 | orchestrator | 16:36:49.196 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-08-29 16:36:49.228011 | orchestrator | 16:36:49.227 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-08-29 16:36:49.232810 | orchestrator | 16:36:49.232 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-08-29 16:36:50.335365 | orchestrator | 16:36:50.335 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=88fb3c85-6f07-4c48-bd32-c9588c6006b5]
2025-08-29 16:36:50.352272 | orchestrator | 16:36:50.351 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-08-29 16:36:52.376967 | orchestrator | 16:36:52.376 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=d9d2a167-7162-4dbb-9ae3-4acc9c24be60]
2025-08-29 16:36:52.385376 | orchestrator | 16:36:52.385 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-08-29 16:36:52.391657 | orchestrator | 16:36:52.391 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=f77776fc-d93a-4a3c-99b5-8bf2997ddc70]
2025-08-29 16:36:52.405809 | orchestrator | 16:36:52.405 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-08-29 16:36:52.405940 | orchestrator | 16:36:52.405 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=95e03a76-aedc-4db5-a6e6-17eb5f40fbcd]
2025-08-29 16:36:52.412911 | orchestrator | 16:36:52.412 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-08-29 16:36:52.430664 | orchestrator | 16:36:52.430 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=2fca4086-decf-4125-8890-d41999d174b7]
2025-08-29 16:36:52.432153 | orchestrator | 16:36:52.431 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=82706033-d7aa-4ff6-a1c3-b9e917369a8a]
2025-08-29 16:36:52.438921 | orchestrator | 16:36:52.438 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-08-29 16:36:52.439471 | orchestrator | 16:36:52.439 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-08-29 16:36:52.485213 | orchestrator | 16:36:52.484 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=822f3ffb-f416-492b-baa5-9c709b0e03df]
2025-08-29 16:36:52.489328 | orchestrator | 16:36:52.488 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=fde2b913-b6a3-4677-89e6-f2f4f6c968dc]
2025-08-29 16:36:52.494330 | orchestrator | 16:36:52.494 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-08-29 16:36:52.503925 | orchestrator | 16:36:52.503 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-08-29 16:36:52.504011 | orchestrator | 16:36:52.503 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=a2504ec8-b4e2-4484-b80f-e6a3be658c85]
2025-08-29 16:36:52.511041 | orchestrator | 16:36:52.510 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=26ab6eb90a58da4684c449e5b530a785f1f34744]
2025-08-29 16:36:52.516755 | orchestrator | 16:36:52.516 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-08-29 16:36:52.517756 | orchestrator | 16:36:52.517 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-08-29 16:36:52.519040 | orchestrator | 16:36:52.518 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=4c7a21bb-a31e-489d-9d8e-9d9677eceddb]
2025-08-29 16:36:52.520906 | orchestrator | 16:36:52.520 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=b402271929d30f507069dcba80675f0b9e916734]
2025-08-29 16:36:53.393953 | orchestrator | 16:36:53.393 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 0s [id=b2593293-569e-4a83-b0b4-085b76956f99]
2025-08-29 16:36:53.401221 | orchestrator | 16:36:53.400 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-08-29 16:36:53.690090 | orchestrator | 16:36:53.689 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=201936ad-eacc-472c-a085-9d8dab0e0a92]
2025-08-29 16:36:55.783158 | orchestrator | 16:36:55.782 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=25b9d25e-6018-4201-98d7-97beb0a1ade7]
2025-08-29 16:36:55.815692 | orchestrator | 16:36:55.814 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=010978aa-1cb4-4e02-9662-44bb4ec64585]
2025-08-29 16:36:55.816473 | orchestrator | 16:36:55.816 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=e5d8ebda-5b30-469e-93ac-66a3d76111dd]
2025-08-29 16:36:55.822084 | orchestrator | 16:36:55.821 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-08-29 16:36:55.822728 | orchestrator | 16:36:55.822 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-08-29 16:36:55.823528 | orchestrator | 16:36:55.823 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-08-29 16:36:55.842383 | orchestrator | 16:36:55.842 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=a4dbedbb-5d3a-43c0-9006-369b69148286]
2025-08-29 16:36:55.844275 | orchestrator | 16:36:55.843 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=16b144c0-621b-45de-9e0b-661fe8ca3416]
2025-08-29 16:36:55.848670 | orchestrator | 16:36:55.848 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=a9f3ec05-5588-4818-9e9a-1cb35db5ebc3]
2025-08-29 16:36:55.892517 | orchestrator | 16:36:55.892 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=473b7367-b912-4e95-8afb-454ed8e49115]
2025-08-29 16:36:56.011687 | orchestrator | 16:36:56.010 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=b3788a26-8fa5-462c-9861-8517e2f6fe6e]
2025-08-29 16:36:56.028574 | orchestrator | 16:36:56.028 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-08-29 16:36:56.028709 | orchestrator | 16:36:56.028 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-08-29 16:36:56.028901 | orchestrator | 16:36:56.028 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-08-29 16:36:56.030761 | orchestrator | 16:36:56.030 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-08-29 16:36:56.032319 | orchestrator | 16:36:56.032 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-08-29 16:36:56.033435 | orchestrator | 16:36:56.033 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-08-29 16:36:56.175518 | orchestrator | 16:36:56.175 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=342d41c1-6d1a-4073-b579-86a29c94aa51]
2025-08-29 16:36:56.341176 | orchestrator | 16:36:56.340 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=91ab3c33-c566-4738-a4fb-89ce37a43321]
2025-08-29 16:36:56.503901 | orchestrator | 16:36:56.503 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=4ce34fdc-17c7-4550-b33a-b3accf147c73]
2025-08-29 16:36:56.511469 | orchestrator | 16:36:56.511 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-08-29 16:36:56.513531 | orchestrator | 16:36:56.513 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-08-29 16:36:56.513603 | orchestrator | 16:36:56.513 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-08-29 16:36:56.518532 | orchestrator | 16:36:56.518 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-08-29 16:36:56.522681 | orchestrator | 16:36:56.522 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-08-29 16:36:56.549347 | orchestrator | 16:36:56.548 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=4a604308-b536-41ef-a5f5-1024591b9c2a]
2025-08-29 16:36:56.564934 | orchestrator | 16:36:56.564 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-08-29 16:36:56.787256 | orchestrator | 16:36:56.786 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=5f739d68-543c-4d8d-b247-96c05dcbdd71]
2025-08-29 16:36:56.796502 | orchestrator | 16:36:56.796 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=7c77e182-dd56-4888-b541-56dc6aa0c521]
2025-08-29 16:36:56.801335 | orchestrator | 16:36:56.801 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-08-29 16:36:56.808149 | orchestrator | 16:36:56.807 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-08-29 16:36:56.834891 | orchestrator | 16:36:56.834 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=35102cfe-6915-4914-af33-3ce361112032]
2025-08-29 16:36:56.847221 | orchestrator | 16:36:56.847 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-08-29 16:36:56.991701 | orchestrator | 16:36:56.991 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=9835d451-f5c7-4866-b739-3878e70e7557]
2025-08-29 16:36:57.013013 | orchestrator | 16:36:57.012 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-08-29 16:36:57.059058 | orchestrator | 16:36:57.058 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=9a205300-4a8a-43f4-9668-538ff776bee4]
2025-08-29 16:36:57.347164 | orchestrator | 16:36:57.346 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 0s [id=30143de4-94e9-49ad-ae4b-7b3450f3e124]
2025-08-29 16:36:57.393623 | orchestrator | 16:36:57.393 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=c43d0254-a98c-408f-90a2-2323b2b73318]
2025-08-29 16:36:57.432063 | orchestrator | 16:36:57.431 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=5dee14bd-fbaa-4a4e-8629-a3788821ead7]
2025-08-29 16:36:57.577278 | orchestrator | 16:36:57.576 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=366ac89f-c0fc-403d-a506-57ce2a2bbb8a]
2025-08-29 16:36:57.618556 | orchestrator | 16:36:57.618 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=5c5caba2-67c9-40f9-9533-9f1a1c695d84]
2025-08-29 16:36:57.731352 | orchestrator | 16:36:57.731 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=d15138ed-08f0-4123-99a3-77cabcf348f8]
2025-08-29 16:36:57.799860 | orchestrator | 16:36:57.799 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=5dda665c-18ce-41a4-9df1-8032986445a0]
2025-08-29 16:36:58.219630 | orchestrator | 16:36:58.219 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=e8297d23-cd7c-458c-845b-45cdb992905c]
2025-08-29 16:36:58.223763 | orchestrator | 16:36:58.223 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=60fbfd15-926a-443a-aa10-71e936a39df1]
2025-08-29 16:36:58.246765 | orchestrator | 16:36:58.246 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-08-29 16:36:58.267392 | orchestrator | 16:36:58.267 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-08-29 16:36:58.267792 | orchestrator | 16:36:58.267 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-08-29 16:36:58.268137 | orchestrator | 16:36:58.268 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-08-29 16:36:58.278364 | orchestrator | 16:36:58.278 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-08-29 16:36:58.280994 | orchestrator | 16:36:58.280 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-08-29 16:36:58.283098 | orchestrator | 16:36:58.282 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-08-29 16:37:00.098472 | orchestrator | 16:37:00.098 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=bb98076a-13ed-4e30-811f-f54c2bda69a0]
2025-08-29 16:37:00.109673 | orchestrator | 16:37:00.109 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-08-29 16:37:00.114630 | orchestrator | 16:37:00.114 STDOUT terraform: local_file.inventory: Creating...
2025-08-29 16:37:00.120954 | orchestrator | 16:37:00.120 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-08-29 16:37:00.122682 | orchestrator | 16:37:00.122 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=9f8acb8a293965112e307ca74569e631c6805eb8]
2025-08-29 16:37:00.126423 | orchestrator | 16:37:00.126 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=0089465221188187bb585f007c1d02c7ee723074]
2025-08-29 16:37:00.804509 | orchestrator | 16:37:00.804 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=bb98076a-13ed-4e30-811f-f54c2bda69a0]
2025-08-29 16:37:08.268582 | orchestrator | 16:37:08.268 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-08-29 16:37:08.268699 | orchestrator | 16:37:08.268 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-08-29 16:37:08.269434 | orchestrator | 16:37:08.269 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-08-29 16:37:08.285964 | orchestrator | 16:37:08.285 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-08-29 16:37:08.286098 | orchestrator | 16:37:08.285 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-08-29 16:37:08.286118 | orchestrator | 16:37:08.285 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-08-29 16:37:18.269235 | orchestrator | 16:37:18.268 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-08-29 16:37:18.269355 | orchestrator | 16:37:18.269 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-08-29 16:37:18.270038 | orchestrator | 16:37:18.269 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-08-29 16:37:18.286392 | orchestrator | 16:37:18.286 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-08-29 16:37:18.286487 | orchestrator | 16:37:18.286 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-08-29 16:37:18.286650 | orchestrator | 16:37:18.286 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-08-29 16:37:18.603915 | orchestrator | 16:37:18.603 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=ed181bb7-29f2-46b4-bf3f-2a82ad45d97f]
2025-08-29 16:37:18.660926 | orchestrator | 16:37:18.660 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=10c3de53-9b0a-4f5d-ae94-fd4bd8a28483]
2025-08-29 16:37:18.685678 | orchestrator | 16:37:18.685 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=5defbe92-0e6b-4abc-b240-bc0a1671e7fd]
2025-08-29 16:37:28.269702 | orchestrator | 16:37:28.269 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-08-29 16:37:28.287075 | orchestrator | 16:37:28.286 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-08-29 16:37:28.287179 | orchestrator | 16:37:28.287 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-08-29 16:37:28.859571 | orchestrator | 16:37:28.859 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=0b9a0e29-727f-4105-9e66-27bab7b9a395]
2025-08-29 16:37:28.872988 | orchestrator | 16:37:28.872 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=35b44690-ed81-4f63-ba15-ce183904e147]
2025-08-29 16:37:28.945255 | orchestrator | 16:37:28.944 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=88a6d10c-0156-4d2a-80bb-a12f20456695]
2025-08-29 16:37:28.974561 | orchestrator | 16:37:28.974 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-08-29 16:37:28.977246 | orchestrator | 16:37:28.977 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-08-29 16:37:28.980274 | orchestrator | 16:37:28.980 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-08-29 16:37:28.985081 | orchestrator | 16:37:28.984 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-08-29 16:37:28.986234 | orchestrator | 16:37:28.985 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-08-29 16:37:28.987624 | orchestrator | 16:37:28.987 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-08-29 16:37:28.989440 | orchestrator | 16:37:28.989 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-08-29 16:37:28.990378 | orchestrator | 16:37:28.990 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-08-29 16:37:28.990646 | orchestrator | 16:37:28.990 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-08-29 16:37:28.999832 | orchestrator | 16:37:28.999 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=2778390585341302098] 2025-08-29 16:37:29.009556 | orchestrator | 16:37:29.009 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-08-29 16:37:29.029918 | orchestrator | 16:37:29.029 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-08-29 16:37:32.351822 | orchestrator | 16:37:32.351 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=35b44690-ed81-4f63-ba15-ce183904e147/d9d2a167-7162-4dbb-9ae3-4acc9c24be60] 2025-08-29 16:37:32.382132 | orchestrator | 16:37:32.381 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=88a6d10c-0156-4d2a-80bb-a12f20456695/f77776fc-d93a-4a3c-99b5-8bf2997ddc70] 2025-08-29 16:37:32.419333 | orchestrator | 16:37:32.418 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=ed181bb7-29f2-46b4-bf3f-2a82ad45d97f/95e03a76-aedc-4db5-a6e6-17eb5f40fbcd] 2025-08-29 16:37:32.447269 | orchestrator | 16:37:32.446 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=35b44690-ed81-4f63-ba15-ce183904e147/822f3ffb-f416-492b-baa5-9c709b0e03df] 2025-08-29 16:37:32.468956 | orchestrator | 16:37:32.468 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=88a6d10c-0156-4d2a-80bb-a12f20456695/4c7a21bb-a31e-489d-9d8e-9d9677eceddb] 2025-08-29 16:37:32.584039 | orchestrator | 16:37:32.583 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=ed181bb7-29f2-46b4-bf3f-2a82ad45d97f/2fca4086-decf-4125-8890-d41999d174b7] 2025-08-29 16:37:38.566960 | orchestrator | 16:37:38.566 STDOUT terraform: 
openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=35b44690-ed81-4f63-ba15-ce183904e147/fde2b913-b6a3-4677-89e6-f2f4f6c968dc] 2025-08-29 16:37:38.582737 | orchestrator | 16:37:38.582 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=88a6d10c-0156-4d2a-80bb-a12f20456695/a2504ec8-b4e2-4484-b80f-e6a3be658c85] 2025-08-29 16:37:38.596721 | orchestrator | 16:37:38.596 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=ed181bb7-29f2-46b4-bf3f-2a82ad45d97f/82706033-d7aa-4ff6-a1c3-b9e917369a8a] 2025-08-29 16:37:39.034254 | orchestrator | 16:37:39.033 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-08-29 16:37:49.035173 | orchestrator | 16:37:49.034 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-08-29 16:37:49.352239 | orchestrator | 16:37:49.351 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=956b78f2-c9e7-4bc9-9549-a00ef4a575fa] 2025-08-29 16:37:49.377948 | orchestrator | 16:37:49.377 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
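After `Apply complete!`, the play fetches the Terraform outputs printed next in the log (`manager_address`, `private_key`; their values are suppressed). A minimal sketch of consuming such outputs — a canned JSON payload and placeholder address stand in for a real `terraform output -json` call, so the snippet is self-contained:

```shell
# Stand-in for: outputs=$(terraform output -json)
# The output names match the log; the address value is a placeholder.
outputs='{"manager_address":{"value":"192.0.2.10"},"private_key":{"value":"REDACTED"}}'

# Extract a single output value with sed (jq would also work).
manager_address=$(printf '%s\n' "$outputs" \
  | sed -n 's/.*"manager_address":{"value":"\([^"]*\)".*/\1/p')
echo "manager address: $manager_address"
```

With real state present, `terraform output -raw manager_address` returns the same value without any parsing.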
2025-08-29 16:37:49.378075 | orchestrator | 16:37:49.377 STDOUT terraform: Outputs: 2025-08-29 16:37:49.378119 | orchestrator | 16:37:49.377 STDOUT terraform: manager_address = 2025-08-29 16:37:49.378129 | orchestrator | 16:37:49.377 STDOUT terraform: private_key = 2025-08-29 16:37:49.788837 | orchestrator | ok: Runtime: 0:01:07.209819 2025-08-29 16:37:49.823679 | 2025-08-29 16:37:49.823804 | TASK [Create infrastructure (stable)] 2025-08-29 16:37:50.357049 | orchestrator | skipping: Conditional result was False 2025-08-29 16:37:50.374891 | 2025-08-29 16:37:50.375092 | TASK [Fetch manager address] 2025-08-29 16:37:50.810089 | orchestrator | ok 2025-08-29 16:37:50.820753 | 2025-08-29 16:37:50.820878 | TASK [Set manager_host address] 2025-08-29 16:37:50.890487 | orchestrator | ok 2025-08-29 16:37:50.900526 | 2025-08-29 16:37:50.900654 | LOOP [Update ansible collections] 2025-08-29 16:37:51.701233 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 16:37:51.701702 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-08-29 16:37:51.701760 | orchestrator | Starting galaxy collection install process 2025-08-29 16:37:51.701797 | orchestrator | Process install dependency map 2025-08-29 16:37:51.701830 | orchestrator | Starting collection install process 2025-08-29 16:37:51.701861 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2025-08-29 16:37:51.701897 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2025-08-29 16:37:51.701934 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-08-29 16:37:51.702015 | orchestrator | ok: Item: commons Runtime: 0:00:00.501959 2025-08-29 16:37:52.512090 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 
2025-08-29 16:37:52.512281 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 16:37:52.512331 | orchestrator | Starting galaxy collection install process 2025-08-29 16:37:52.512371 | orchestrator | Process install dependency map 2025-08-29 16:37:52.512490 | orchestrator | Starting collection install process 2025-08-29 16:37:52.512525 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2025-08-29 16:37:52.512560 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2025-08-29 16:37:52.512594 | orchestrator | osism.services:999.0.0 was installed successfully 2025-08-29 16:37:52.512645 | orchestrator | ok: Item: services Runtime: 0:00:00.556640 2025-08-29 16:37:52.534233 | 2025-08-29 16:37:52.534439 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-08-29 16:38:03.102733 | orchestrator | ok 2025-08-29 16:38:03.122199 | 2025-08-29 16:38:03.122420 | TASK [Wait a little longer for the manager so that everything is ready] 2025-08-29 16:39:03.168848 | orchestrator | ok 2025-08-29 16:39:03.180936 | 2025-08-29 16:39:03.181083 | TASK [Fetch manager ssh hostkey] 2025-08-29 16:39:04.758490 | orchestrator | Output suppressed because no_log was given 2025-08-29 16:39:04.775709 | 2025-08-29 16:39:04.775886 | TASK [Get ssh keypair from terraform environment] 2025-08-29 16:39:05.313299 | orchestrator | ok: Runtime: 0:00:00.009622 2025-08-29 16:39:05.328173 | 2025-08-29 16:39:05.328320 | TASK [Point out that the following task takes some time and does not give any output] 2025-08-29 16:39:05.379575 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
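The "Wait up to 300 seconds for port 22" task above succeeds once the SSH banner contains "OpenSSH". A minimal shell sketch of that check — `banner_ready` is a hypothetical helper, and a hard-coded banner stands in for a live socket read:

```shell
# Hypothetical helper: succeed once the banner contains the expected marker.
banner_ready() {
  case "$1" in *"$2"*) return 0 ;; *) return 1 ;; esac
}

# A live poll would read the banner from the socket, e.g.:
#   banner=$(timeout 5 bash -c 'exec 3<>/dev/tcp/$host/22; head -n1 <&3')
banner='SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13'
if banner_ready "$banner" OpenSSH; then
  echo "port 22 ready"
fi
```

The real task is Ansible's wait_for-style check; matching on the banner rather than the open port alone avoids racing sshd during boot.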
2025-08-29 16:39:05.391931 | 2025-08-29 16:39:05.392096 | TASK [Run manager part 0] 2025-08-29 16:39:06.233987 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 16:39:06.273331 | orchestrator | 2025-08-29 16:39:06.273361 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-08-29 16:39:06.273367 | orchestrator | 2025-08-29 16:39:06.273376 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-08-29 16:39:08.133579 | orchestrator | ok: [testbed-manager] 2025-08-29 16:39:08.133621 | orchestrator | 2025-08-29 16:39:08.133638 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-08-29 16:39:08.133647 | orchestrator | 2025-08-29 16:39:08.133655 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 16:39:10.043335 | orchestrator | ok: [testbed-manager] 2025-08-29 16:39:10.043376 | orchestrator | 2025-08-29 16:39:10.043383 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-08-29 16:39:10.715103 | orchestrator | ok: [testbed-manager] 2025-08-29 16:39:10.715168 | orchestrator | 2025-08-29 16:39:10.715177 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-08-29 16:39:10.764353 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:39:10.764415 | orchestrator | 2025-08-29 16:39:10.764425 | orchestrator | TASK [Update package cache] **************************************************** 2025-08-29 16:39:10.802284 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:39:10.802348 | orchestrator | 2025-08-29 16:39:10.802357 | orchestrator | TASK [Install required packages] *********************************************** 2025-08-29 16:39:10.831883 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:39:10.831977 | 
orchestrator | 2025-08-29 16:39:10.831993 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-08-29 16:39:10.861752 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:39:10.861818 | orchestrator | 2025-08-29 16:39:10.861825 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-08-29 16:39:10.909365 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:39:10.909433 | orchestrator | 2025-08-29 16:39:10.909442 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-08-29 16:39:10.948317 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:39:10.948369 | orchestrator | 2025-08-29 16:39:10.948377 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-08-29 16:39:10.987331 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:39:10.987389 | orchestrator | 2025-08-29 16:39:10.987399 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-08-29 16:39:11.772013 | orchestrator | changed: [testbed-manager] 2025-08-29 16:39:11.772089 | orchestrator | 2025-08-29 16:39:11.772101 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-08-29 16:41:56.786292 | orchestrator | changed: [testbed-manager] 2025-08-29 16:41:56.786450 | orchestrator | 2025-08-29 16:41:56.786484 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-08-29 16:43:51.748646 | orchestrator | changed: [testbed-manager] 2025-08-29 16:43:51.748800 | orchestrator | 2025-08-29 16:43:51.748819 | orchestrator | TASK [Install required packages] *********************************************** 2025-08-29 16:44:20.890311 | orchestrator | changed: [testbed-manager] 2025-08-29 16:44:20.890397 | orchestrator | 2025-08-29 16:44:20.890407 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-08-29 16:44:30.483501 | orchestrator | changed: [testbed-manager] 2025-08-29 16:44:30.483614 | orchestrator | 2025-08-29 16:44:30.483626 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-08-29 16:44:30.528018 | orchestrator | ok: [testbed-manager] 2025-08-29 16:44:30.528097 | orchestrator | 2025-08-29 16:44:30.528110 | orchestrator | TASK [Get current user] ******************************************************** 2025-08-29 16:44:31.339108 | orchestrator | ok: [testbed-manager] 2025-08-29 16:44:31.339245 | orchestrator | 2025-08-29 16:44:31.339266 | orchestrator | TASK [Create venv directory] *************************************************** 2025-08-29 16:44:32.095130 | orchestrator | changed: [testbed-manager] 2025-08-29 16:44:32.095225 | orchestrator | 2025-08-29 16:44:32.095255 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-08-29 16:44:39.160149 | orchestrator | changed: [testbed-manager] 2025-08-29 16:44:39.160189 | orchestrator | 2025-08-29 16:44:39.160212 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-08-29 16:44:45.864639 | orchestrator | changed: [testbed-manager] 2025-08-29 16:44:45.864686 | orchestrator | 2025-08-29 16:44:45.864699 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-08-29 16:44:48.623019 | orchestrator | changed: [testbed-manager] 2025-08-29 16:44:48.623755 | orchestrator | 2025-08-29 16:44:48.623776 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-08-29 16:44:50.553263 | orchestrator | changed: [testbed-manager] 2025-08-29 16:44:50.553352 | orchestrator | 2025-08-29 16:44:50.553374 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-08-29 
16:44:51.833759 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-08-29 16:44:51.833800 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-08-29 16:44:51.833806 | orchestrator | 2025-08-29 16:44:51.833812 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-08-29 16:44:51.875606 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-08-29 16:44:51.875687 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-08-29 16:44:51.875697 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-08-29 16:44:51.875704 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-08-29 16:44:55.068883 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-08-29 16:44:55.068983 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-08-29 16:44:55.068997 | orchestrator | 2025-08-29 16:44:55.069023 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-08-29 16:44:55.634537 | orchestrator | changed: [testbed-manager] 2025-08-29 16:44:55.634581 | orchestrator | 2025-08-29 16:44:55.634589 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-08-29 16:45:15.272312 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-08-29 16:45:15.272450 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-08-29 16:45:15.272469 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-08-29 16:45:15.272482 | orchestrator | 2025-08-29 16:45:15.272496 | orchestrator | TASK [Install local collections] *********************************************** 2025-08-29 16:45:17.803323 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-08-29 16:45:17.803456 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-08-29 16:45:17.803470 | orchestrator | 2025-08-29 16:45:17.803481 | orchestrator | PLAY [Create operator user] **************************************************** 2025-08-29 16:45:17.803491 | orchestrator | 2025-08-29 16:45:17.803501 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 16:45:19.274597 | orchestrator | ok: [testbed-manager] 2025-08-29 16:45:19.274784 | orchestrator | 2025-08-29 16:45:19.274806 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-08-29 16:45:19.326072 | orchestrator | ok: [testbed-manager] 2025-08-29 16:45:19.326163 | orchestrator | 2025-08-29 16:45:19.326179 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-08-29 16:45:19.402396 | orchestrator | ok: [testbed-manager] 2025-08-29 16:45:19.402479 | orchestrator | 2025-08-29 16:45:19.402490 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-08-29 16:45:20.219378 | orchestrator | changed: [testbed-manager] 2025-08-29 16:45:20.219473 | orchestrator | 2025-08-29 16:45:20.219489 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-08-29 16:45:20.989559 | orchestrator | changed: [testbed-manager] 2025-08-29 16:45:20.989648 | orchestrator | 2025-08-29 16:45:20.989660 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-08-29 16:45:22.415804 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-08-29 16:45:22.415896 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-08-29 16:45:22.415911 | orchestrator | 2025-08-29 16:45:22.415942 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-08-29 16:45:23.846761 | orchestrator | changed: [testbed-manager] 2025-08-29 16:45:23.846836 | orchestrator | 2025-08-29 16:45:23.846848 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-08-29 16:45:25.764055 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 16:45:25.764156 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-08-29 16:45:25.764171 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-08-29 16:45:25.764183 | orchestrator | 2025-08-29 16:45:25.764196 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-08-29 16:45:25.812128 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:45:25.812229 | orchestrator | 2025-08-29 16:45:25.812254 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-08-29 16:45:26.426577 | orchestrator | changed: [testbed-manager] 2025-08-29 16:45:26.426679 | orchestrator | 2025-08-29 16:45:26.426691 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-08-29 16:45:26.493445 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:45:26.493539 | orchestrator | 2025-08-29 16:45:26.493555 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-08-29 16:45:27.399415 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 16:45:27.399533 | orchestrator | changed: [testbed-manager] 2025-08-29 16:45:27.399561 | orchestrator | 2025-08-29 16:45:27.399580 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-08-29 16:45:27.434923 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:45:27.435012 | orchestrator | 2025-08-29 16:45:27.435025 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-08-29 16:45:27.470260 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:45:27.470364 | orchestrator | 2025-08-29 16:45:27.470389 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-08-29 16:45:27.505978 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:45:27.506166 | orchestrator | 2025-08-29 16:45:27.506183 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-08-29 16:45:27.560112 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:45:27.560207 | orchestrator | 2025-08-29 16:45:27.560224 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-08-29 16:45:28.271338 | orchestrator | ok: [testbed-manager] 2025-08-29 16:45:28.271426 | orchestrator | 2025-08-29 16:45:28.271442 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-08-29 16:45:28.271455 | orchestrator | 2025-08-29 16:45:28.271467 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 16:45:29.768539 | orchestrator | ok: [testbed-manager] 2025-08-29 16:45:29.768577 | orchestrator | 2025-08-29 16:45:29.768583 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-08-29 16:45:30.767707 | orchestrator | changed: [testbed-manager] 2025-08-29 16:45:30.767743 | orchestrator | 2025-08-29 16:45:30.767749 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 16:45:30.767755 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-08-29 16:45:30.767759 | orchestrator | 2025-08-29 16:45:31.175080 | orchestrator | ok: Runtime: 0:06:25.134203 2025-08-29 16:45:31.194011 | 2025-08-29 16:45:31.194162 | TASK [Point 
out that logging in on the manager is now possible] 2025-08-29 16:45:31.243652 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-08-29 16:45:31.257397 | 2025-08-29 16:45:31.257522 | TASK [Point out that the following task takes some time and does not give any output] 2025-08-29 16:45:31.300290 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-08-29 16:45:31.308299 | 2025-08-29 16:45:31.308410 | TASK [Run manager part 1 + 2] 2025-08-29 16:45:32.144350 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 16:45:32.199868 | orchestrator | 2025-08-29 16:45:32.199921 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-08-29 16:45:32.199928 | orchestrator | 2025-08-29 16:45:32.199942 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 16:45:34.808935 | orchestrator | ok: [testbed-manager] 2025-08-29 16:45:34.809023 | orchestrator | 2025-08-29 16:45:34.809071 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-08-29 16:45:34.842085 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:45:34.842154 | orchestrator | 2025-08-29 16:45:34.842172 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-08-29 16:45:34.878668 | orchestrator | ok: [testbed-manager] 2025-08-29 16:45:34.878727 | orchestrator | 2025-08-29 16:45:34.878735 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 16:45:34.926827 | orchestrator | ok: [testbed-manager] 2025-08-29 16:45:34.926908 | orchestrator | 2025-08-29 16:45:34.926925 | orchestrator | TASK [osism.commons.repository : Set repository_default fact 
to default value] *** 2025-08-29 16:45:34.998459 | orchestrator | ok: [testbed-manager] 2025-08-29 16:45:34.998551 | orchestrator | 2025-08-29 16:45:34.998568 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 16:45:35.055533 | orchestrator | ok: [testbed-manager] 2025-08-29 16:45:35.055659 | orchestrator | 2025-08-29 16:45:35.055681 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 16:45:35.093626 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-08-29 16:45:35.093701 | orchestrator | 2025-08-29 16:45:35.093712 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-08-29 16:45:35.847678 | orchestrator | ok: [testbed-manager] 2025-08-29 16:45:35.847758 | orchestrator | 2025-08-29 16:45:35.847773 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 16:45:35.893300 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:45:35.893396 | orchestrator | 2025-08-29 16:45:35.893413 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 16:45:37.328726 | orchestrator | changed: [testbed-manager] 2025-08-29 16:45:37.328887 | orchestrator | 2025-08-29 16:45:37.328904 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 16:45:37.927878 | orchestrator | ok: [testbed-manager] 2025-08-29 16:45:37.927967 | orchestrator | 2025-08-29 16:45:37.927984 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 16:45:39.094095 | orchestrator | changed: [testbed-manager] 2025-08-29 16:45:39.094151 | orchestrator | 2025-08-29 16:45:39.094161 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-08-29 16:45:56.543527 | orchestrator | changed: [testbed-manager] 2025-08-29 16:45:56.543663 | orchestrator | 2025-08-29 16:45:56.543687 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-08-29 16:45:57.250645 | orchestrator | ok: [testbed-manager] 2025-08-29 16:45:57.250737 | orchestrator | 2025-08-29 16:45:57.250756 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-08-29 16:45:57.297753 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:45:57.297838 | orchestrator | 2025-08-29 16:45:57.297857 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-08-29 16:45:58.360980 | orchestrator | changed: [testbed-manager] 2025-08-29 16:45:58.361024 | orchestrator | 2025-08-29 16:45:58.361033 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-08-29 16:45:59.395365 | orchestrator | changed: [testbed-manager] 2025-08-29 16:45:59.395462 | orchestrator | 2025-08-29 16:45:59.395478 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-08-29 16:46:00.018826 | orchestrator | changed: [testbed-manager] 2025-08-29 16:46:00.018870 | orchestrator | 2025-08-29 16:46:00.018879 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-08-29 16:46:00.064079 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-08-29 16:46:00.064180 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-08-29 16:46:00.064193 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-08-29 16:46:00.064202 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-08-29 16:46:02.753014 | orchestrator | changed: [testbed-manager] 2025-08-29 16:46:02.753116 | orchestrator | 2025-08-29 16:46:02.753134 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-08-29 16:46:12.600529 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-08-29 16:46:12.600565 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-08-29 16:46:12.600574 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-08-29 16:46:12.600581 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-08-29 16:46:12.600633 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-08-29 16:46:12.600640 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-08-29 16:46:12.600647 | orchestrator | 2025-08-29 16:46:12.600654 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-08-29 16:46:13.625930 | orchestrator | changed: [testbed-manager] 2025-08-29 16:46:13.625996 | orchestrator | 2025-08-29 16:46:13.626009 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-08-29 16:46:13.663682 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:46:13.663727 | orchestrator | 2025-08-29 16:46:13.663739 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-08-29 16:46:16.739688 | orchestrator | changed: [testbed-manager] 2025-08-29 16:46:16.739732 | orchestrator | 2025-08-29 16:46:16.739739 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-08-29 16:46:16.769492 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:46:16.769525 | orchestrator | 2025-08-29 16:46:16.769531 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-08-29 16:47:59.262629 | orchestrator | changed: [testbed-manager] 2025-08-29 
16:47:59.262726 | orchestrator | 2025-08-29 16:47:59.262745 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-08-29 16:48:00.430506 | orchestrator | ok: [testbed-manager] 2025-08-29 16:48:00.430548 | orchestrator | 2025-08-29 16:48:00.430556 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 16:48:00.430564 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-08-29 16:48:00.430570 | orchestrator | 2025-08-29 16:48:00.967890 | orchestrator | ok: Runtime: 0:02:28.892810 2025-08-29 16:48:00.986518 | 2025-08-29 16:48:00.986721 | TASK [Reboot manager] 2025-08-29 16:48:02.526900 | orchestrator | ok: Runtime: 0:00:00.975795 2025-08-29 16:48:02.544541 | 2025-08-29 16:48:02.544790 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-08-29 16:48:18.962787 | orchestrator | ok 2025-08-29 16:48:18.971037 | 2025-08-29 16:48:18.971155 | TASK [Wait a little longer for the manager so that everything is ready] 2025-08-29 16:49:19.016776 | orchestrator | ok 2025-08-29 16:49:19.026535 | 2025-08-29 16:49:19.026699 | TASK [Deploy manager + bootstrap nodes] 2025-08-29 16:49:21.972738 | orchestrator | 2025-08-29 16:49:21.972944 | orchestrator | # DEPLOY MANAGER 2025-08-29 16:49:21.972968 | orchestrator | 2025-08-29 16:49:21.972983 | orchestrator | + set -e 2025-08-29 16:49:21.972996 | orchestrator | + echo 2025-08-29 16:49:21.973010 | orchestrator | + echo '# DEPLOY MANAGER' 2025-08-29 16:49:21.973028 | orchestrator | + echo 2025-08-29 16:49:21.973081 | orchestrator | + cat /opt/manager-vars.sh 2025-08-29 16:49:21.975952 | orchestrator | export NUMBER_OF_NODES=6 2025-08-29 16:49:21.975995 | orchestrator | 2025-08-29 16:49:21.976008 | orchestrator | export CEPH_VERSION=reef 2025-08-29 16:49:21.976021 | orchestrator | export CONFIGURATION_VERSION=main 2025-08-29 16:49:21.976034 | orchestrator 
| export MANAGER_VERSION=latest 2025-08-29 16:49:21.976056 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-08-29 16:49:21.976067 | orchestrator | 2025-08-29 16:49:21.976085 | orchestrator | export ARA=false 2025-08-29 16:49:21.976096 | orchestrator | export DEPLOY_MODE=manager 2025-08-29 16:49:21.976114 | orchestrator | export TEMPEST=false 2025-08-29 16:49:21.976126 | orchestrator | export IS_ZUUL=true 2025-08-29 16:49:21.976137 | orchestrator | 2025-08-29 16:49:21.976154 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.184 2025-08-29 16:49:21.976166 | orchestrator | export EXTERNAL_API=false 2025-08-29 16:49:21.976177 | orchestrator | 2025-08-29 16:49:21.976188 | orchestrator | export IMAGE_USER=ubuntu 2025-08-29 16:49:21.976203 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-08-29 16:49:21.976214 | orchestrator | 2025-08-29 16:49:21.976225 | orchestrator | export CEPH_STACK=ceph-ansible 2025-08-29 16:49:21.976241 | orchestrator | 2025-08-29 16:49:21.976253 | orchestrator | + echo 2025-08-29 16:49:21.976265 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 16:49:21.977271 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 16:49:21.977289 | orchestrator | ++ INTERACTIVE=false 2025-08-29 16:49:21.977303 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 16:49:21.977316 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 16:49:21.977331 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 16:49:21.977438 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 16:49:21.977454 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 16:49:21.977559 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 16:49:21.977575 | orchestrator | ++ CEPH_VERSION=reef 2025-08-29 16:49:21.977586 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 16:49:21.977597 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-08-29 16:49:21.977642 | orchestrator | ++ export MANAGER_VERSION=latest 2025-08-29 16:49:21.977654 | 
orchestrator | ++ MANAGER_VERSION=latest 2025-08-29 16:49:21.977665 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-08-29 16:49:21.977686 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-08-29 16:49:21.977697 | orchestrator | ++ export ARA=false 2025-08-29 16:49:21.977708 | orchestrator | ++ ARA=false 2025-08-29 16:49:21.977719 | orchestrator | ++ export DEPLOY_MODE=manager 2025-08-29 16:49:21.977737 | orchestrator | ++ DEPLOY_MODE=manager 2025-08-29 16:49:21.977748 | orchestrator | ++ export TEMPEST=false 2025-08-29 16:49:21.977760 | orchestrator | ++ TEMPEST=false 2025-08-29 16:49:21.977771 | orchestrator | ++ export IS_ZUUL=true 2025-08-29 16:49:21.977783 | orchestrator | ++ IS_ZUUL=true 2025-08-29 16:49:21.977794 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.184 2025-08-29 16:49:21.977806 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.184 2025-08-29 16:49:21.977818 | orchestrator | ++ export EXTERNAL_API=false 2025-08-29 16:49:21.977829 | orchestrator | ++ EXTERNAL_API=false 2025-08-29 16:49:21.977845 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-08-29 16:49:21.977858 | orchestrator | ++ IMAGE_USER=ubuntu 2025-08-29 16:49:21.977870 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-08-29 16:49:21.977881 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-08-29 16:49:21.977894 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-08-29 16:49:21.977905 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-08-29 16:49:21.977917 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-08-29 16:49:22.040032 | orchestrator | + docker version 2025-08-29 16:49:22.364777 | orchestrator | Client: Docker Engine - Community 2025-08-29 16:49:22.364887 | orchestrator | Version: 27.5.1 2025-08-29 16:49:22.364907 | orchestrator | API version: 1.47 2025-08-29 16:49:22.364920 | orchestrator | Go version: go1.22.11 2025-08-29 16:49:22.364931 | orchestrator | Git commit: 9f9e405 2025-08-29 
16:49:22.364942 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-08-29 16:49:22.364954 | orchestrator | OS/Arch: linux/amd64
2025-08-29 16:49:22.364966 | orchestrator | Context: default
2025-08-29 16:49:22.364976 | orchestrator |
2025-08-29 16:49:22.364988 | orchestrator | Server: Docker Engine - Community
2025-08-29 16:49:22.364999 | orchestrator | Engine:
2025-08-29 16:49:22.365011 | orchestrator | Version: 27.5.1
2025-08-29 16:49:22.365022 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-08-29 16:49:22.365068 | orchestrator | Go version: go1.22.11
2025-08-29 16:49:22.365080 | orchestrator | Git commit: 4c9b3b0
2025-08-29 16:49:22.365091 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-08-29 16:49:22.365102 | orchestrator | OS/Arch: linux/amd64
2025-08-29 16:49:22.365113 | orchestrator | Experimental: false
2025-08-29 16:49:22.365124 | orchestrator | containerd:
2025-08-29 16:49:22.365135 | orchestrator | Version: 1.7.27
2025-08-29 16:49:22.365146 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-08-29 16:49:22.365157 | orchestrator | runc:
2025-08-29 16:49:22.365168 | orchestrator | Version: 1.2.5
2025-08-29 16:49:22.365179 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-08-29 16:49:22.365190 | orchestrator | docker-init:
2025-08-29 16:49:22.365200 | orchestrator | Version: 0.19.0
2025-08-29 16:49:22.365212 | orchestrator | GitCommit: de40ad0
2025-08-29 16:49:22.369745 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-08-29 16:49:22.380407 | orchestrator | + set -e
2025-08-29 16:49:22.380463 | orchestrator | + source /opt/manager-vars.sh
2025-08-29 16:49:22.380482 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-08-29 16:49:22.380500 | orchestrator | ++ NUMBER_OF_NODES=6
2025-08-29 16:49:22.380511 | orchestrator | ++ export CEPH_VERSION=reef
2025-08-29 16:49:22.380522 | orchestrator | ++ CEPH_VERSION=reef
2025-08-29 16:49:22.380533 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-08-29 16:49:22.380559 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-08-29 16:49:22.380571 | orchestrator | ++ export MANAGER_VERSION=latest
2025-08-29 16:49:22.380582 | orchestrator | ++ MANAGER_VERSION=latest
2025-08-29 16:49:22.380592 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 16:49:22.380603 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 16:49:22.380653 | orchestrator | ++ export ARA=false
2025-08-29 16:49:22.380665 | orchestrator | ++ ARA=false
2025-08-29 16:49:22.380676 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 16:49:22.380687 | orchestrator | ++ DEPLOY_MODE=manager
2025-08-29 16:49:22.380698 | orchestrator | ++ export TEMPEST=false
2025-08-29 16:49:22.380708 | orchestrator | ++ TEMPEST=false
2025-08-29 16:49:22.380719 | orchestrator | ++ export IS_ZUUL=true
2025-08-29 16:49:22.380730 | orchestrator | ++ IS_ZUUL=true
2025-08-29 16:49:22.380740 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.184
2025-08-29 16:49:22.380751 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.184
2025-08-29 16:49:22.380762 | orchestrator | ++ export EXTERNAL_API=false
2025-08-29 16:49:22.380772 | orchestrator | ++ EXTERNAL_API=false
2025-08-29 16:49:22.380783 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-08-29 16:49:22.380793 | orchestrator | ++ IMAGE_USER=ubuntu
2025-08-29 16:49:22.380807 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-08-29 16:49:22.380826 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-08-29 16:49:22.380844 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-08-29 16:49:22.380863 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-08-29 16:49:22.380907 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-08-29 16:49:22.380922 | orchestrator | ++ export INTERACTIVE=false
2025-08-29 16:49:22.380932 | orchestrator | ++ INTERACTIVE=false
2025-08-29 16:49:22.380943 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-08-29 16:49:22.380959 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-08-29 16:49:22.380976 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-08-29 16:49:22.380987 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-08-29 16:49:22.380997 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-08-29 16:49:22.386741 | orchestrator | + set -e
2025-08-29 16:49:22.386791 | orchestrator | + VERSION=reef
2025-08-29 16:49:22.387847 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-08-29 16:49:22.397683 | orchestrator | + [[ -n ceph_version: reef ]]
2025-08-29 16:49:22.397719 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-08-29 16:49:22.403541 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-08-29 16:49:22.409330 | orchestrator | + set -e
2025-08-29 16:49:22.409363 | orchestrator | + VERSION=2024.2
2025-08-29 16:49:22.409801 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-08-29 16:49:22.414451 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-08-29 16:49:22.414477 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-08-29 16:49:22.420145 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-08-29 16:49:22.421585 | orchestrator | ++ semver latest 7.0.0
2025-08-29 16:49:22.482324 | orchestrator | + [[ -1 -ge 0 ]]
2025-08-29 16:49:22.482387 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-08-29 16:49:22.482400 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-08-29 16:49:22.482412 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-08-29 16:49:22.575828 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-08-29 16:49:22.577241 | orchestrator | + source /opt/venv/bin/activate
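The `set-ceph-version.sh` and `set-openstack-version.sh` traces above follow the same grep-then-sed pattern: check that the key already exists in the configuration file, then rewrite its value in place. A minimal sketch of that pattern (the `set_version_key` helper, its argument order, and the generic key parameter are illustrative assumptions; the real scripts take a single version argument and hardcode their key and file):

```shell
#!/usr/bin/env bash
# Sketch of the grep-then-sed version bump seen in the trace above.
# set_version_key KEY VALUE FILE is a hypothetical generalization of
# set-ceph-version.sh / set-openstack-version.sh; requires GNU sed (-i).
set -e

set_version_key() {
    local key="$1" value="$2" file="$3"
    # Only rewrite when the key is already present, mirroring the
    # [[ -n $(grep ...) ]] guard in the traced scripts.
    if grep -q "^${key}:" "$file"; then
        sed -i "s/${key}: .*/${key}: ${value}/g" "$file"
    fi
}
```

For example, `set_version_key ceph_version reef configuration.yml` turns `ceph_version: quincy` into `ceph_version: reef` while leaving unrelated keys untouched; a file without the key is left unchanged rather than having one appended.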
16:49:22.578468 | orchestrator | ++ deactivate nondestructive
2025-08-29 16:49:22.578530 | orchestrator | ++ '[' -n '' ']'
2025-08-29 16:49:22.578539 | orchestrator | ++ '[' -n '' ']'
2025-08-29 16:49:22.578548 | orchestrator | ++ hash -r
2025-08-29 16:49:22.578555 | orchestrator | ++ '[' -n '' ']'
2025-08-29 16:49:22.578562 | orchestrator | ++ unset VIRTUAL_ENV
2025-08-29 16:49:22.578569 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-08-29 16:49:22.578576 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-08-29 16:49:22.578584 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-08-29 16:49:22.578593 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-08-29 16:49:22.578625 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-08-29 16:49:22.578634 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-08-29 16:49:22.578642 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-08-29 16:49:22.578649 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-08-29 16:49:22.578656 | orchestrator | ++ export PATH
2025-08-29 16:49:22.578663 | orchestrator | ++ '[' -n '' ']'
2025-08-29 16:49:22.578670 | orchestrator | ++ '[' -z '' ']'
2025-08-29 16:49:22.578677 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-08-29 16:49:22.578684 | orchestrator | ++ PS1='(venv) '
2025-08-29 16:49:22.578690 | orchestrator | ++ export PS1
2025-08-29 16:49:22.578697 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-08-29 16:49:22.578704 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-08-29 16:49:22.578711 | orchestrator | ++ hash -r
2025-08-29 16:49:22.578737 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-08-29 16:49:24.178413 | orchestrator |
2025-08-29 16:49:24.178529 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-08-29 16:49:24.178547 | orchestrator |
2025-08-29 16:49:24.178560 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-08-29 16:49:24.843175 | orchestrator | ok: [testbed-manager]
2025-08-29 16:49:24.843282 | orchestrator |
2025-08-29 16:49:24.843299 | orchestrator | TASK [Copy fact files] *********************************************************
2025-08-29 16:49:26.051313 | orchestrator | changed: [testbed-manager]
2025-08-29 16:49:26.051405 | orchestrator |
2025-08-29 16:49:26.051422 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-08-29 16:49:26.051436 | orchestrator |
2025-08-29 16:49:26.051448 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 16:49:28.688200 | orchestrator | ok: [testbed-manager]
2025-08-29 16:49:28.688305 | orchestrator |
2025-08-29 16:49:28.688322 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-08-29 16:49:28.727881 | orchestrator | ok: [testbed-manager]
2025-08-29 16:49:28.727949 | orchestrator |
2025-08-29 16:49:28.727964 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-08-29 16:49:29.223955 | orchestrator | changed: [testbed-manager]
2025-08-29 16:49:29.224047 | orchestrator |
2025-08-29 16:49:29.224063 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-08-29 16:49:29.271228 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:49:29.271320 | orchestrator |
2025-08-29 16:49:29.271337 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-08-29 16:49:29.646083 | orchestrator | changed: [testbed-manager]
2025-08-29 16:49:29.646170 | orchestrator |
2025-08-29 16:49:29.646185 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-08-29 16:49:29.708283 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:49:29.708364 | orchestrator |
2025-08-29 16:49:29.708378 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-08-29 16:49:30.057756 | orchestrator | ok: [testbed-manager]
2025-08-29 16:49:30.058791 | orchestrator |
2025-08-29 16:49:30.058859 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-08-29 16:49:30.197202 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:49:30.197307 | orchestrator |
2025-08-29 16:49:30.197324 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-08-29 16:49:30.197337 | orchestrator |
2025-08-29 16:49:30.197352 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 16:49:33.157764 | orchestrator | ok: [testbed-manager]
2025-08-29 16:49:33.157831 | orchestrator |
2025-08-29 16:49:33.157838 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-08-29 16:49:33.291906 | orchestrator | included: osism.services.traefik for testbed-manager
2025-08-29 16:49:33.291993 | orchestrator |
2025-08-29 16:49:33.292008 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-08-29 16:49:33.358268 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-08-29 16:49:33.358367 | orchestrator |
2025-08-29 16:49:33.358391 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-08-29 16:49:34.512664 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-08-29 16:49:34.512775 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-08-29 16:49:34.512790 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-08-29 16:49:34.512803 | orchestrator |
2025-08-29 16:49:34.512822 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-08-29 16:49:36.519217 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-08-29 16:49:36.519425 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-08-29 16:49:36.519447 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-08-29 16:49:36.519460 | orchestrator |
2025-08-29 16:49:36.519472 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-08-29 16:49:37.224083 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 16:49:37.224190 | orchestrator | changed: [testbed-manager]
2025-08-29 16:49:37.224207 | orchestrator |
2025-08-29 16:49:37.224222 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-08-29 16:49:37.995387 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 16:49:37.995489 | orchestrator | changed: [testbed-manager]
2025-08-29 16:49:37.995505 | orchestrator |
2025-08-29 16:49:37.995518 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-08-29 16:49:38.064832 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:49:38.064939 | orchestrator |
2025-08-29 16:49:38.064962 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-08-29 16:49:38.517777 | orchestrator | ok: [testbed-manager]
2025-08-29 16:49:38.517882 | orchestrator |
2025-08-29 16:49:38.517893 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-08-29 16:49:38.603968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-08-29 16:49:38.604059 | orchestrator |
2025-08-29 16:49:38.604069 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-08-29 16:49:39.712512 | orchestrator | changed: [testbed-manager]
2025-08-29 16:49:39.712643 | orchestrator |
2025-08-29 16:49:39.712661 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-08-29 16:49:40.623541 | orchestrator | changed: [testbed-manager]
2025-08-29 16:49:40.623727 | orchestrator |
2025-08-29 16:49:40.623747 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-08-29 16:49:51.921308 | orchestrator | changed: [testbed-manager]
2025-08-29 16:49:51.921423 | orchestrator |
2025-08-29 16:49:51.921441 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-08-29 16:49:51.974313 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:49:51.974404 | orchestrator |
2025-08-29 16:49:51.974419 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-08-29 16:49:51.974432 | orchestrator |
2025-08-29 16:49:51.974443 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 16:49:53.836385 | orchestrator | ok: [testbed-manager]
2025-08-29 16:49:53.836492 | orchestrator |
2025-08-29 16:49:53.836539 | orchestrator | TASK [Apply manager role] ******************************************************
2025-08-29 16:49:53.963163 | orchestrator | included: osism.services.manager for testbed-manager
2025-08-29 16:49:53.963280 | orchestrator |
2025-08-29 16:49:53.963306 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-08-29 16:49:54.022427 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-08-29 16:49:54.022520 | orchestrator |
2025-08-29 16:49:54.022536 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-08-29 16:49:56.947515 | orchestrator | ok: [testbed-manager]
2025-08-29 16:49:56.947678 | orchestrator |
2025-08-29 16:49:56.947698 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-08-29 16:49:56.995705 | orchestrator | ok: [testbed-manager]
2025-08-29 16:49:56.995831 | orchestrator |
2025-08-29 16:49:56.995859 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-08-29 16:49:57.142517 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-08-29 16:49:57.142605 | orchestrator |
2025-08-29 16:49:57.142683 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-08-29 16:50:00.280797 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-08-29 16:50:00.280914 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-08-29 16:50:00.280936 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-08-29 16:50:00.280952 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-08-29 16:50:00.280968 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-08-29 16:50:00.280984 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-08-29 16:50:00.281000 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-08-29 16:50:00.281016 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-08-29 16:50:00.281033 | orchestrator |
2025-08-29 16:50:00.281052 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-08-29 16:50:00.977943 | orchestrator | changed: [testbed-manager]
2025-08-29 16:50:00.978105 | orchestrator |
2025-08-29 16:50:00.978124 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-08-29 16:50:01.649819 | orchestrator | changed: [testbed-manager]
2025-08-29 16:50:01.649921 | orchestrator |
2025-08-29 16:50:01.649936 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-08-29 16:50:01.726477 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-08-29 16:50:01.726566 | orchestrator |
2025-08-29 16:50:01.726576 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-08-29 16:50:02.933048 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-08-29 16:50:02.933153 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-08-29 16:50:02.933165 | orchestrator |
2025-08-29 16:50:02.933175 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-08-29 16:50:03.566313 | orchestrator | changed: [testbed-manager]
2025-08-29 16:50:03.566423 | orchestrator |
2025-08-29 16:50:03.566441 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-08-29 16:50:03.619289 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:50:03.619413 | orchestrator |
2025-08-29 16:50:03.619431 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-08-29 16:50:03.677933 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:50:03.678078 | orchestrator |
2025-08-29 16:50:03.678095 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-08-29 16:50:03.732729 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-08-29 16:50:03.732813 | orchestrator |
2025-08-29 16:50:03.732828 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-08-29 16:50:05.239732 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 16:50:05.239851 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 16:50:05.239911 | orchestrator | changed: [testbed-manager]
2025-08-29 16:50:05.239935 | orchestrator |
2025-08-29 16:50:05.239948 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-08-29 16:50:05.926232 | orchestrator | changed: [testbed-manager]
2025-08-29 16:50:05.926341 | orchestrator |
2025-08-29 16:50:05.926357 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-08-29 16:50:05.978676 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:50:05.978779 | orchestrator |
2025-08-29 16:50:05.978797 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-08-29 16:50:06.069930 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-08-29 16:50:06.070067 | orchestrator |
2025-08-29 16:50:06.070085 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-08-29 16:50:06.638266 | orchestrator | changed: [testbed-manager]
2025-08-29 16:50:06.638362 | orchestrator |
2025-08-29 16:50:06.638379 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-08-29 16:50:07.124445 | orchestrator | changed: [testbed-manager]
2025-08-29 16:50:07.124503 | orchestrator |
2025-08-29 16:50:07.124510 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-08-29 16:50:08.475740 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-08-29 16:50:08.475831 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-08-29 16:50:08.475857 | orchestrator |
2025-08-29 16:50:08.475879 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-08-29 16:50:09.112596 | orchestrator | changed: [testbed-manager]
2025-08-29 16:50:09.112731 | orchestrator |
2025-08-29 16:50:09.112748 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-08-29 16:50:09.539616 | orchestrator | ok: [testbed-manager]
2025-08-29 16:50:09.539766 | orchestrator |
2025-08-29 16:50:09.539794 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-08-29 16:50:09.947676 | orchestrator | changed: [testbed-manager]
2025-08-29 16:50:09.947775 | orchestrator |
2025-08-29 16:50:09.947790 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-08-29 16:50:10.003880 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:50:10.003966 | orchestrator |
2025-08-29 16:50:10.003983 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-08-29 16:50:10.080164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-08-29 16:50:10.080266 | orchestrator |
2025-08-29 16:50:10.080282 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-08-29 16:50:10.142476 | orchestrator | ok: [testbed-manager]
2025-08-29 16:50:10.142546 | orchestrator |
2025-08-29 16:50:10.142554 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-08-29 16:50:12.302223 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-08-29 16:50:12.302289 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-08-29 16:50:12.302295 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-08-29 16:50:12.302299 | orchestrator |
2025-08-29 16:50:12.302304 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-08-29 16:50:13.187228 | orchestrator | changed: [testbed-manager]
2025-08-29 16:50:13.187313 | orchestrator |
2025-08-29 16:50:13.187329 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-08-29 16:50:13.977668 | orchestrator | changed: [testbed-manager]
2025-08-29 16:50:13.977784 | orchestrator |
2025-08-29 16:50:13.977809 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-08-29 16:50:14.759847 | orchestrator | changed: [testbed-manager]
2025-08-29 16:50:14.759931 | orchestrator |
2025-08-29 16:50:14.759944 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-08-29 16:50:14.849520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-08-29 16:50:14.849609 | orchestrator |
2025-08-29 16:50:14.849658 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-08-29 16:50:14.908457 | orchestrator | ok: [testbed-manager]
2025-08-29 16:50:14.908532 | orchestrator |
2025-08-29 16:50:14.908541 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-08-29 16:50:15.684971 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-08-29 16:50:15.685099 | orchestrator |
2025-08-29 16:50:15.685126 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-08-29 16:50:15.763968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-08-29 16:50:15.764069 | orchestrator |
2025-08-29 16:50:15.764084 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-08-29 16:50:16.468012 | orchestrator | changed: [testbed-manager]
2025-08-29 16:50:16.468109 | orchestrator |
2025-08-29 16:50:16.468125 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-08-29 16:50:17.091956 | orchestrator | ok: [testbed-manager]
2025-08-29 16:50:17.092027 | orchestrator |
2025-08-29 16:50:17.092035 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-08-29 16:50:17.155830 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:50:17.155920 | orchestrator |
2025-08-29 16:50:17.155935 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-08-29 16:50:17.217031 | orchestrator | ok: [testbed-manager]
2025-08-29 16:50:17.217110 | orchestrator |
2025-08-29 16:50:17.217121 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-08-29 16:50:18.119952 | orchestrator | changed: [testbed-manager]
2025-08-29 16:50:18.120056 | orchestrator |
2025-08-29 16:50:18.120071 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-08-29 16:51:53.440788 | orchestrator | changed: [testbed-manager]
2025-08-29 16:51:53.440900 | orchestrator |
2025-08-29 16:51:53.440919 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-08-29 16:51:54.502208 | orchestrator | ok: [testbed-manager]
2025-08-29 16:51:54.502311 | orchestrator |
2025-08-29 16:51:54.502328 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-08-29 16:51:54.566561 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:51:54.566661 | orchestrator |
2025-08-29 16:51:54.566709 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-08-29 16:51:57.388020 | orchestrator | changed: [testbed-manager]
2025-08-29 16:51:57.388135 | orchestrator |
2025-08-29 16:51:57.388160 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-08-29 16:51:57.489876 | orchestrator | ok: [testbed-manager]
2025-08-29 16:51:57.489966 | orchestrator |
2025-08-29 16:51:57.489982 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-08-29 16:51:57.489995 | orchestrator |
2025-08-29 16:51:57.490006 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-08-29 16:51:57.545026 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:51:57.545112 | orchestrator |
2025-08-29 16:51:57.545127 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-08-29 16:52:57.608893 | orchestrator | Pausing for 60 seconds
2025-08-29 16:52:57.609007 | orchestrator | changed: [testbed-manager]
2025-08-29 16:52:57.609023 | orchestrator |
2025-08-29 16:52:57.609037 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-08-29 16:53:01.328670 | orchestrator | changed: [testbed-manager]
2025-08-29 16:53:01.328788 | orchestrator |
2025-08-29 16:53:01.328806 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-08-29 16:54:03.629055 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-08-29 16:54:03.629169 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-08-29 16:54:03.629185 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2025-08-29 16:54:03.629198 | orchestrator | changed: [testbed-manager]
2025-08-29 16:54:03.629211 | orchestrator |
2025-08-29 16:54:03.629223 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-08-29 16:54:14.622325 | orchestrator | changed: [testbed-manager]
2025-08-29 16:54:14.623244 | orchestrator |
2025-08-29 16:54:14.623311 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-08-29 16:54:14.726802 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-08-29 16:54:14.726892 | orchestrator |
2025-08-29 16:54:14.726907 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-08-29 16:54:14.726919 | orchestrator |
2025-08-29 16:54:14.726931 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-08-29 16:54:14.782534 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:54:14.782589 | orchestrator |
2025-08-29 16:54:14.782603 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 16:54:14.782615 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2025-08-29 16:54:14.782627 | orchestrator |
2025-08-29 16:54:14.893328 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-08-29 16:54:14.893405 | orchestrator | + deactivate
2025-08-29 16:54:14.893419 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-08-29 16:54:14.893456 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-08-29 16:54:14.893468 | orchestrator | + export PATH
2025-08-29 16:54:14.893480 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-08-29 16:54:14.893492 | orchestrator | + '[' -n '' ']'
2025-08-29 16:54:14.893503 | orchestrator | + hash -r
2025-08-29 16:54:14.893515 | orchestrator | + '[' -n '' ']'
2025-08-29 16:54:14.893526 | orchestrator | + unset VIRTUAL_ENV
2025-08-29 16:54:14.893537 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-08-29 16:54:14.893549 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-08-29 16:54:14.893560 | orchestrator | + unset -f deactivate
2025-08-29 16:54:14.893572 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-08-29 16:54:14.900228 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-08-29 16:54:14.900253 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-08-29 16:54:14.900264 | orchestrator | + local max_attempts=60
2025-08-29 16:54:14.900275 | orchestrator | + local name=ceph-ansible
2025-08-29 16:54:14.900286 | orchestrator | + local attempt_num=1
2025-08-29 16:54:14.901770 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 16:54:14.942684 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-08-29 16:54:14.942717 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-08-29 16:54:14.942758 | orchestrator | + local max_attempts=60
2025-08-29 16:54:14.942770 | orchestrator | + local name=kolla-ansible
2025-08-29 16:54:14.942781 | orchestrator | + local attempt_num=1
2025-08-29 16:54:14.943798 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-08-29 16:54:14.990272 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-08-29 16:54:14.990315 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-08-29 16:54:14.990327 | orchestrator | + local max_attempts=60
2025-08-29 16:54:14.990338 | orchestrator | + local name=osism-ansible
2025-08-29 16:54:14.990350 | orchestrator | + local attempt_num=1
2025-08-29 16:54:14.991329 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-08-29 16:54:15.037003 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-08-29 16:54:15.037053 | orchestrator | + [[ true == \t\r\u\e ]]
2025-08-29 16:54:15.037066 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-08-29 16:54:15.839549 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-08-29 16:54:16.097037 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-08-29 16:54:16.097127 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2025-08-29 16:54:16.097142 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2025-08-29 16:54:16.097155 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2025-08-29 16:54:16.097189 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp
2025-08-29 16:54:16.097211 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2025-08-29 16:54:16.097223 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2025-08-29 16:54:16.097234 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2025-08-29 16:54:16.097245 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2025-08-29 16:54:16.097256 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2025-08-29 16:54:16.097267 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
2025-08-29 16:54:16.097277 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2025-08-29 16:54:16.097288 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2025-08-29 16:54:16.097299 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2025-08-29 16:54:16.097310 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2025-08-29 16:54:16.107006 | orchestrator | ++ semver latest 7.0.0
2025-08-29 16:54:16.176486 | orchestrator | + [[ -1 -ge 0 ]]
2025-08-29 16:54:16.176557 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-08-29 16:54:16.176572 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-08-29 16:54:16.180858 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-08-29 16:54:28.635354 | orchestrator | 2025-08-29 16:54:28 | INFO  | Task 8e97f1f6-6f11-4e33-a4e7-b97ba354f291 (resolvconf) was prepared for execution.
2025-08-29 16:54:28.635465 | orchestrator | 2025-08-29 16:54:28 | INFO  | It takes a moment until task 8e97f1f6-6f11-4e33-a4e7-b97ba354f291 (resolvconf) has been started and output is visible here.
2025-08-29 16:54:43.076786 | orchestrator |
2025-08-29 16:54:43.076897 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-08-29 16:54:43.076916 | orchestrator |
2025-08-29 16:54:43.076929 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 16:54:43.076944 | orchestrator | Friday 29 August 2025 16:54:32 +0000 (0:00:00.187) 0:00:00.187 *********
2025-08-29 16:54:43.076956 | orchestrator | ok: [testbed-manager]
2025-08-29 16:54:43.076968 | orchestrator |
2025-08-29 16:54:43.076984 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-08-29 16:54:43.076997 | orchestrator | Friday 29 August 2025 16:54:36 +0000 (0:00:03.918) 0:00:04.105 *********
2025-08-29 16:54:43.077027 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:54:43.077040 | orchestrator |
2025-08-29 16:54:43.077052 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-08-29 16:54:43.077063 | orchestrator | Friday 29 August 2025 16:54:36 +0000 (0:00:00.052) 0:00:04.158 *********
2025-08-29 16:54:43.077075 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-08-29 16:54:43.077088 | orchestrator |
2025-08-29 16:54:43.077099 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-08-29 16:54:43.077111 | orchestrator | Friday 29 August 2025 16:54:36 +0000 (0:00:00.072) 0:00:04.240 *********
2025-08-29 16:54:43.077122 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-08-29 16:54:43.077134 | orchestrator |
2025-08-29 16:54:43.077145 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-08-29 16:54:43.077157 | orchestrator | Friday 29 August 2025 16:54:37 +0000 (0:00:00.072) 0:00:04.313 *********
2025-08-29 16:54:43.077168 | orchestrator | ok: [testbed-manager]
2025-08-29 16:54:43.077179 | orchestrator |
2025-08-29 16:54:43.077191 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-08-29 16:54:43.077202 | orchestrator | Friday 29 August 2025 16:54:38 +0000 (0:00:01.203) 0:00:05.517 *********
2025-08-29 16:54:43.077214 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:54:43.077225 | orchestrator |
2025-08-29 16:54:43.077236 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-08-29 16:54:43.077248 | orchestrator | Friday 29 August 2025 16:54:38 +0000 (0:00:00.069) 0:00:05.586 *********
2025-08-29 16:54:43.077259 | orchestrator | ok: [testbed-manager]
2025-08-29 16:54:43.077271 | orchestrator |
2025-08-29 16:54:43.077282 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-08-29 16:54:43.077293 | orchestrator | Friday 29 August 2025 16:54:38 +0000 (0:00:00.477) 0:00:06.064 *********
2025-08-29 16:54:43.077305 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:54:43.077318 | orchestrator |
2025-08-29 16:54:43.077332 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-08-29 16:54:43.077346 | orchestrator | Friday 29 August 2025 16:54:38 +0000 (0:00:00.086) 0:00:06.150 *********
2025-08-29 16:54:43.077359 | orchestrator | changed: [testbed-manager]
2025-08-29 16:54:43.077372 | orchestrator |
2025-08-29 16:54:43.077385 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-08-29 16:54:43.077398 | orchestrator | Friday 29 August 2025 16:54:39 +0000 (0:00:00.550) 0:00:06.701 *********
2025-08-29 16:54:43.077411 | orchestrator | changed: [testbed-manager]
2025-08-29 16:54:43.077424 | orchestrator |
2025-08-29 16:54:43.077437 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-08-29 16:54:43.077449 | orchestrator | Friday 29 August 2025 16:54:40 +0000 (0:00:01.093) 0:00:07.794 *********
2025-08-29 16:54:43.077462 | orchestrator | ok: [testbed-manager]
2025-08-29 16:54:43.077475 | orchestrator |
2025-08-29 16:54:43.077488 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-08-29 16:54:43.077501 | orchestrator | Friday 29 August 2025 16:54:41 +0000 (0:00:00.985) 0:00:08.779 *********
2025-08-29 16:54:43.077514 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-08-29 16:54:43.077526 | orchestrator |
2025-08-29 16:54:43.077537 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-08-29 16:54:43.077557 | orchestrator | Friday 29 August 2025 16:54:41 +0000 (0:00:00.074) 0:00:08.854 *********
2025-08-29 16:54:43.077569 | orchestrator | changed: [testbed-manager]
2025-08-29 16:54:43.077580 | orchestrator |
2025-08-29 16:54:43.077592 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 16:54:43.077604 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 16:54:43.077624 | orchestrator |
2025-08-29 16:54:43.077635 | orchestrator |
2025-08-29 16:54:43.077647 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 16:54:43.077658 | orchestrator | Friday 29 August 2025 16:54:42 +0000 (0:00:01.225) 0:00:10.080 *********
2025-08-29 16:54:43.077670 | orchestrator | ===============================================================================
2025-08-29 16:54:43.077681 | orchestrator | Gathering Facts --------------------------------------------------------- 3.92s
2025-08-29 16:54:43.077693 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.23s
2025-08-29 16:54:43.077704 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.20s
2025-08-29 16:54:43.077715 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.09s
2025-08-29 16:54:43.077727 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.99s
2025-08-29 16:54:43.077762 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s
2025-08-29 16:54:43.077790 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s
2025-08-29 16:54:43.077803 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2025-08-29 16:54:43.077815 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2025-08-29 16:54:43.077826 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s
2025-08-29 16:54:43.077838 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2025-08-29 16:54:43.077849 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2025-08-29 16:54:43.077886 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s
2025-08-29 16:54:43.427244 | orchestrator | + osism apply sshconfig
2025-08-29 16:54:55.585683 | orchestrator | 2025-08-29 16:54:55 | INFO  | Task 0e3fce4a-7aba-4b81-8132-4e7e6a035d80 (sshconfig) was prepared for execution.
2025-08-29 16:54:55.585814 | orchestrator | 2025-08-29 16:54:55 | INFO  | It takes a moment until task 0e3fce4a-7aba-4b81-8132-4e7e6a035d80 (sshconfig) has been started and output is visible here.
2025-08-29 16:55:07.974363 | orchestrator |
2025-08-29 16:55:07.974458 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-08-29 16:55:07.974475 | orchestrator |
2025-08-29 16:55:07.974487 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-08-29 16:55:07.974498 | orchestrator | Friday 29 August 2025 16:54:59 +0000 (0:00:00.167) 0:00:00.167 *********
2025-08-29 16:55:07.974509 | orchestrator | ok: [testbed-manager]
2025-08-29 16:55:07.974520 | orchestrator |
2025-08-29 16:55:07.974531 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-08-29 16:55:07.974542 | orchestrator | Friday 29 August 2025 16:55:00 +0000 (0:00:00.655) 0:00:00.823 *********
2025-08-29 16:55:07.974552 | orchestrator | changed: [testbed-manager]
2025-08-29 16:55:07.974564 | orchestrator |
2025-08-29 16:55:07.974574 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-08-29 16:55:07.974585 | orchestrator | Friday 29 August 2025 16:55:00 +0000 (0:00:00.543) 0:00:01.366 *********
2025-08-29 16:55:07.974596 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-08-29 16:55:07.974607 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-08-29 16:55:07.974619 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-08-29 16:55:07.974630 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-08-29 16:55:07.974640 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-08-29 16:55:07.974651 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-08-29 16:55:07.974662 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-08-29 16:55:07.974696 | orchestrator |
2025-08-29 16:55:07.974722 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-08-29 16:55:07.974734 | orchestrator | Friday 29 August 2025 16:55:07 +0000 (0:00:06.050) 0:00:07.417 *********
2025-08-29 16:55:07.974776 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:55:07.974787 | orchestrator |
2025-08-29 16:55:07.974798 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-08-29 16:55:07.974809 | orchestrator | Friday 29 August 2025 16:55:07 +0000 (0:00:00.067) 0:00:07.485 *********
2025-08-29 16:55:07.974820 | orchestrator | changed: [testbed-manager]
2025-08-29 16:55:07.974830 | orchestrator |
2025-08-29 16:55:07.974841 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 16:55:07.974854 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 16:55:07.974865 | orchestrator |
2025-08-29 16:55:07.974876 | orchestrator |
2025-08-29 16:55:07.974887 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 16:55:07.974898 | orchestrator | Friday 29 August 2025 16:55:07 +0000 (0:00:00.635) 0:00:08.120 *********
2025-08-29 16:55:07.974908 | orchestrator | ===============================================================================
2025-08-29 16:55:07.974919 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.05s
2025-08-29 16:55:07.974930 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.66s
2025-08-29 16:55:07.974942 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.64s
2025-08-29 16:55:07.974955 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.54s
2025-08-29 16:55:07.974967 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-08-29 16:55:08.281507 | orchestrator | + osism apply known-hosts
2025-08-29 16:55:20.426836 | orchestrator | 2025-08-29 16:55:20 | INFO  | Task 24cabb3b-dce9-49cc-a0a6-3086aa62a524 (known-hosts) was prepared for execution.
2025-08-29 16:55:20.426936 | orchestrator | 2025-08-29 16:55:20 | INFO  | It takes a moment until task 24cabb3b-dce9-49cc-a0a6-3086aa62a524 (known-hosts) has been started and output is visible here.
2025-08-29 16:55:37.696322 | orchestrator |
2025-08-29 16:55:37.696436 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-08-29 16:55:37.696453 | orchestrator |
2025-08-29 16:55:37.696465 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-08-29 16:55:37.696477 | orchestrator | Friday 29 August 2025 16:55:24 +0000 (0:00:00.191) 0:00:00.191 *********
2025-08-29 16:55:37.696490 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-08-29 16:55:37.696501 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-08-29 16:55:37.696513 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-08-29 16:55:37.696524 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-08-29 16:55:37.696535 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-08-29 16:55:37.696546 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-08-29 16:55:37.696557 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-08-29 16:55:37.696568 | orchestrator |
2025-08-29 16:55:37.696579 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-08-29 16:55:37.696591 | orchestrator | Friday 29 August 2025 16:55:30 +0000 (0:00:06.190) 0:00:06.382 *********
2025-08-29 16:55:37.696603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-08-29 16:55:37.696616 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-08-29 16:55:37.696627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-08-29 16:55:37.696659 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-08-29 16:55:37.696670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-08-29 16:55:37.696692 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-08-29 16:55:37.696704 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-08-29 16:55:37.696715 | orchestrator |
2025-08-29 16:55:37.696726 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 16:55:37.696737 | orchestrator | Friday 29 August 2025 16:55:30 +0000 (0:00:00.171) 0:00:06.553 *********
2025-08-29 16:55:37.696790 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519
AAAAC3NzaC1lZDI1NTE5AAAAIPGXTH+SfweOznF48UK7jqcbiMzMp7rDfSYcwx8bLzXZ) 2025-08-29 16:55:37.696808 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDgeBDSQ4X7SIvd5FP4zUyh/2jsEjrGGPOtSxsur23RkTKtSx+oDaAqiRd6uf5iHcsz/87WczLTLp1iC5ZxvSeL52ZnkdHtgwoCStsQJ9VOfze2ibBPLod91uYKMQEIaBZbVh0l1kykeOQ4kZl9R7YK2ETMEt6wExAKxXx9AeVGuTgt070hmoqO50GWIONFI6FCUiIGfUG2xr0GCee37Dl/6SrYhOoaOy+xqU1BHI8dpXp6GUj1cwxeBPTCuNg/sewyFgRHfX0/BSCge8x2CO3tVXMwtPs63PhCqacoaEQAK/le8+FbNt6Amw5ClXtv3dhwNXxG99TwTqDzlzMUuIRVOizU/DY1ZKfswVAjkOGbBAfB6SKqryEr6NZpVETNX1X+gCq9mI9ErTb6VrORLW+/3wPXkdjiw93JtT+qKApqJsHtfJFDgt5hxkEmRExkNJwNHPfvA3IoSkx20GjAh+KIyLfPRu8cF8R1eQpb7pONv8qkB1E8y0Avkp0n+1TbzLE=) 2025-08-29 16:55:37.696824 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEat77JAG5bsfxOOe8AzwbZzZRZop1ai88mwUXZg5WG0MoeUELvMf1IHStfJv2tX3i/HLWLEQMHJay6sIEDf8IE=) 2025-08-29 16:55:37.696837 | orchestrator | 2025-08-29 16:55:37.696848 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 16:55:37.696858 | orchestrator | Friday 29 August 2025 16:55:32 +0000 (0:00:01.260) 0:00:07.814 ********* 2025-08-29 16:55:37.696869 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA6YeSOrhInyy2983Q+DwixXqaPEhhsTWsqB+700WXz6) 2025-08-29 16:55:37.696911 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC8iIknOp9pKkv56U2iSdMKoF2YyxQ3b6mYGgN9pSZqs+DF2KLIEgQJ/CUiRvuOvFG5bDEJ/fCLOk0NRoiX8b2C0n5RkGMvcPiUagDQxDouBgqhVnIRKufveeZPz91rKLx8FG/iH+Z6cmVM2qUHFQsbptOXt6carbseZhtNgeYK/hfiLqmuVZh8/24uJn7lVqZ6ow4/aMahLyjqkmelnVV8u9lHfk5R+6Hq0onn+G3myK4U01Oq3QWU/5UvGj0vM0JJn249V64WJSBfTEoJkxM8VaudQ/eTX9LMXQgnmX4iB73CB7b7X+dMSThCK1lUNqrdO8DgEyoii/RvokXSHaRULGsr6k6TZz3GwvXR0iFxprvceDG9b/F71DrCu61S7BI/xaggi0VofefPrwetqWoOAXAUCZV9LLmUHB+hv8Y6hE9iDv6tEmYH61hB5suT59oC57PDgHSmyLVXnI9yNfboRjgzowBejX7UmQxtKwodHNoObWpDQA8ox3SN9BbiPWk=) 2025-08-29 16:55:37.696924 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBLOa1l7t/DozFS5HpYS7AblfuQdnXcsAvFIsQ+UE3yCDEz25ejZN4FYD1bnCp/7y5HuiP+IaJ7PXRMubIPkVI0=) 2025-08-29 16:55:37.696935 | orchestrator | 2025-08-29 16:55:37.696946 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 16:55:37.696957 | orchestrator | Friday 29 August 2025 16:55:33 +0000 (0:00:01.122) 0:00:08.936 ********* 2025-08-29 16:55:37.696968 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1edKhdnjrTFB5UALO2SbQwpgls3X7qWCiC4X+Jw2WA9/os+xEUjT86jiL8KHwqMYl9t/wbi175hiFBqMlpPOGVzK345UhVoG3Pf199m/Rszao0jM1VQY2zQ9KAcxNouX1LjrlaQ4VSw+14ERyU6/d1T7i8VyvyJ3327/oiqkQzyWAV2upQrSgm0xD1Qsi+y431YUBNECzp+lA2wRPZgNV9D1UKqzduI7g0Zdk8RaaKpoSDpZqyHIBgdD90jCev8ZN6wbadWIDY0c75SmWM8CaN6tyoZTL65qyujyOETDfY5/UW2t1orU1mP3yIIjaOSQGtMZdbnCyTqYIx/iRnRkvLxZsGUR8A6e/UZUregnWuJniZvymKVpmUFBsYF4SCQKjk991vgxXEDrE6XCaW/k6z8AEQzslL9phJmumINPf3e4xpn7CkA6MURMUb/z/6IzcGH1wcOwm//R3Gz9vyBgEO/1OVOD8fasCCOkPy2t5wRnH1Go159DujY12zbxMOQM=) 2025-08-29 16:55:37.696988 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOu6HRsgjMfAdN1CZFplOp3I+j7i751pf9O/Gvog28PTmCUl6oP8vDSbabbR1KE2fbbAl/KVMNMSNzJ0/8J8xec=) 
2025-08-29 16:55:37.697000 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB/HyJPXZUnGa6b6JmOYZnHySf1zMLjewsBtdginx0Dr) 2025-08-29 16:55:37.697011 | orchestrator | 2025-08-29 16:55:37.697021 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 16:55:37.697032 | orchestrator | Friday 29 August 2025 16:55:34 +0000 (0:00:01.112) 0:00:10.049 ********* 2025-08-29 16:55:37.697105 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKfK9qDn9seL6Ej898MleuxRi8HzXHdaI4cYtxL3FAku) 2025-08-29 16:55:37.697117 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCb+8mxSHAiSXinLHFBS5NSMLaJdnFOft7VEnlSPOffNXpGPcwLl5nXRY5Tg4MgYzbsJLXMgOwHJEA66ymkcsx82EQptKiwWxhZZNo/G3JOvNRZOpe8z0D1SWXkGlZudJnyb+ZeBDl+3RObXU08WgP6VCkr6w3FjRKJbL9fv93suXNHPDcHZWc/uUr8RdK+dap+BJ/u8MD58Z3vzzHAGGbaHiiHWccIT3Ikx+jbucYnH+f2yAZuEZPR047zasIbx3uxIQ4Xvb6nBMdLsa8vL+SibF/xu6YyrQiUzF8Ybkj4pt+206JAb5vFVpkOsw2WJEZA3GItTU3sGr2UeRPIG3alrXG8+5moucSleRv+cc3RpXyKO9O5Ca4IVgYg8BylgPxMjpWgckhDfkReHbpD8l4OY9rbTUDYOjVdNdbyaZL6LhD7uhXCDwGgu6NC8CJTfJUiH2Qo7BgPjmj6fQ4kJ6uclwfGr1RMxY2uNU6P/QZkGFz2SNv0grt/cY2azLTiF/0=) 2025-08-29 16:55:37.697128 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBdkHucxFGxEuKxzwDooyhCrlO3+TB4FsM4gSW1iS4j76tqRMwAG0mIQ7mvCwb0b6hAvfBw6r8NT4LoB78L0yKs=) 2025-08-29 16:55:37.697139 | orchestrator | 2025-08-29 16:55:37.697150 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 16:55:37.697161 | orchestrator | Friday 29 August 2025 16:55:35 +0000 (0:00:01.067) 0:00:11.116 ********* 2025-08-29 16:55:37.697172 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCb+Z7SzLT6DOpOZrEksiVPu4he62zyZ1DblmyswwXoq0dDws8i8g/qPoIm2DkCrSrRzKL47VUrxIjLGcWF7t7oFugegBNIitHkKlujzULAeL9oplgPFVZIhGx+y2QT4QrCDNpRnWcAmJZq1ZZD7LA71vJ5+aLBxykJZJXniD0eTneXqVApco8PaDkg8U0Vw1i8NadKic+Py/3yVHN1RCj/8IhpO7VEYBrCMWwnIqiBZrbLgszxZblEKZhcmV2avsKVZQVd8qY5cP/qOY0IC3db6oYzTwjRJft0FD7stP98sL2TL089N+MqRxTyq6JNrINFAN8puW/Ptajc4q7o0HRhpkWi69wWXP8lb+S4Z6lmZX3ADw4RERxb8IK8FbRgit5UIJlH2yemqDZwDyin7iPLTzKfjuys37/BFb7NkZV/BfclswXhds2kM3O1M5G9HNQ5oedm2KNw5dy6X6Bk9Tn94YbuD+i5rdaIhlgQPAXxhVqB1QgGA2XciaHhuJJmCVM=) 2025-08-29 16:55:37.697183 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAj7wNI9q5UuQc21/tiVtG7UFH/HU4vDlLgp129AVh0LSG8RclfJVHen0dTnXt8JIYttDTyxnCO+U9JVl96L9Ys=) 2025-08-29 16:55:37.697194 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKzgcfrLDgsqfHFoFYncdnX5CqrHUe5z/zn5w5U59OF6) 2025-08-29 16:55:37.697205 | orchestrator | 2025-08-29 16:55:37.697216 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 16:55:37.697227 | orchestrator | Friday 29 August 2025 16:55:36 +0000 (0:00:01.135) 0:00:12.251 ********* 2025-08-29 16:55:37.697245 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHbp9YWKNGAxRRkVDbsAa+g/RESgHmFl34wPaXw9qXmC) 2025-08-29 16:55:48.980430 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCV2y2GIKnoNOcmgyi0K08KvTfDSnZ1sRpyCnCzwFoX4EJ6/+RWRQKYjA9NclljzDcIKXJcpOYfsXWPMeUbkUsIlX2lGfYbA7kd/w7y0DHyHZPiB1mDpnTkaSdav8E8scW+liIrVkv9G07dFX4SfWgsi7BCsHzhjN7pKuDBkzNlLSeFTq5ggtI3sGaXvo3FfZRimg0jzJpXqakIq6lhAUCHg3RlqWfjPMlWLbsk0qykbVED8/vRO9I17P7YeeyKEvJ/yIJnFCp2d9YHjeIs3Z4MgsSxUmkXymINL+G3hW/B0LleMk8PJe99ikVVEFaOFkryJlBj86mQgHXpJbkK4lX3bDKPpxTJBnc2bl3jj50nGSN6/efWpKGN9F2RBN4qvLOOonEXjQ2xBXkGZxvfJ7XgtY3G+QC3Ev05/qzB9y2AZ+AyVsm+qyxy4wolNKwtgs6lMHEzUTWWyxEUHpQ1jA3MrxhTRndUrYIpbX1EkRf7MfAZ6sMhoP0Qx4kMCt2JebE=) 2025-08-29 16:55:48.980558 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK72IaWhmAQwH5kJhdmq4VmnNOjNnZYCU80/WoR02kSa3m5rZzqPW/i/2apWnqqhuLppQL1BT7PyAQdyQ+8FrhY=) 2025-08-29 16:55:48.980578 | orchestrator | 2025-08-29 16:55:48.980591 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 16:55:48.980604 | orchestrator | Friday 29 August 2025 16:55:37 +0000 (0:00:01.127) 0:00:13.379 ********* 2025-08-29 16:55:48.980615 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6eyYP/bUW8+yt6Nz9tPv7x4QddYpLnHtXcP96R6zE0DDNVune29yKjEeahFZe2lKVAaKT6zgOhtB48gOJMyA356O0wh1+wJ0tw6dR8Lhzqzg8Idi7Q2kvXbajzkSm/56BmgjPtcBMmW5/nWDjVkRF8SgI2LUQAzTL9V/i5sg7qBwDfOF4+VtWXtGkfhro/yebvTAsck4KAcs+wNETtC/XXzSAnsJD4EIBQXAr5kghdtVObJcGwDWeTb/Of9QzNY7BSYvk6ypAPBXL7cGVp8aTr4YWyoABYqaMdj5QyFUnkieoPCd5z8qVz0Ig9Ng1bkzzAKlrUqnL8YUY12oRgfYV0kaKU8SlDroDnzPUmBxt30C0ua0UlM2Bik109IjuWfjKThHupGFvZHcu+X29AgWycIiH7rPyTUHsPQSAZPgHIvKQCjPs77KDa+cqmNE3yfcloHZ5LGtF1AOvkD7rBoZyLDU7DsEJ6YuzH/juTQjiL+Zq+pdA++jjZWqW/xOmPHc=) 2025-08-29 16:55:48.980627 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAEJTDUZGgKcyh6A3Bz1iI7pkFh5BqUHdd300xfu/gJwby/lEpi8NahgyH8/UEx8k9RL/tDROECA6fF+X09bq6Q=) 
2025-08-29 16:55:48.980638 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKvUNGUsL00KMRSnSsaLH3KtT3MD6v+YcQlwYuLOqWIk)
2025-08-29 16:55:48.980651 | orchestrator |
2025-08-29 16:55:48.980662 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-08-29 16:55:48.980683 | orchestrator | Friday 29 August 2025 16:55:38 +0000 (0:00:01.084) 0:00:14.464 *********
2025-08-29 16:55:48.980696 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-08-29 16:55:48.980707 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-08-29 16:55:48.980718 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-08-29 16:55:48.980728 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-08-29 16:55:48.980739 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-08-29 16:55:48.980750 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-08-29 16:55:48.980789 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-08-29 16:55:48.980800 | orchestrator |
2025-08-29 16:55:48.980811 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-08-29 16:55:48.980823 | orchestrator | Friday 29 August 2025 16:55:44 +0000 (0:00:05.519) 0:00:19.983 *********
2025-08-29 16:55:48.980835 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-08-29 16:55:48.980847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-08-29 16:55:48.980858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-08-29 16:55:48.980869 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-08-29 16:55:48.980888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-08-29 16:55:48.980899 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-08-29 16:55:48.980910 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-08-29 16:55:48.980921 | orchestrator |
2025-08-29 16:55:48.980945 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 16:55:48.980958 | orchestrator | Friday 29 August 2025 16:55:44 +0000 (0:00:00.193) 0:00:20.177 *********
2025-08-29 16:55:48.980971 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQDgeBDSQ4X7SIvd5FP4zUyh/2jsEjrGGPOtSxsur23RkTKtSx+oDaAqiRd6uf5iHcsz/87WczLTLp1iC5ZxvSeL52ZnkdHtgwoCStsQJ9VOfze2ibBPLod91uYKMQEIaBZbVh0l1kykeOQ4kZl9R7YK2ETMEt6wExAKxXx9AeVGuTgt070hmoqO50GWIONFI6FCUiIGfUG2xr0GCee37Dl/6SrYhOoaOy+xqU1BHI8dpXp6GUj1cwxeBPTCuNg/sewyFgRHfX0/BSCge8x2CO3tVXMwtPs63PhCqacoaEQAK/le8+FbNt6Amw5ClXtv3dhwNXxG99TwTqDzlzMUuIRVOizU/DY1ZKfswVAjkOGbBAfB6SKqryEr6NZpVETNX1X+gCq9mI9ErTb6VrORLW+/3wPXkdjiw93JtT+qKApqJsHtfJFDgt5hxkEmRExkNJwNHPfvA3IoSkx20GjAh+KIyLfPRu8cF8R1eQpb7pONv8qkB1E8y0Avkp0n+1TbzLE=) 2025-08-29 16:55:48.980983 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEat77JAG5bsfxOOe8AzwbZzZRZop1ai88mwUXZg5WG0MoeUELvMf1IHStfJv2tX3i/HLWLEQMHJay6sIEDf8IE=) 2025-08-29 16:55:48.980994 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPGXTH+SfweOznF48UK7jqcbiMzMp7rDfSYcwx8bLzXZ) 2025-08-29 16:55:48.981005 | orchestrator | 2025-08-29 16:55:48.981016 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 16:55:48.981027 | orchestrator | Friday 29 August 2025 16:55:45 +0000 (0:00:01.132) 0:00:21.310 ********* 2025-08-29 16:55:48.981038 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8iIknOp9pKkv56U2iSdMKoF2YyxQ3b6mYGgN9pSZqs+DF2KLIEgQJ/CUiRvuOvFG5bDEJ/fCLOk0NRoiX8b2C0n5RkGMvcPiUagDQxDouBgqhVnIRKufveeZPz91rKLx8FG/iH+Z6cmVM2qUHFQsbptOXt6carbseZhtNgeYK/hfiLqmuVZh8/24uJn7lVqZ6ow4/aMahLyjqkmelnVV8u9lHfk5R+6Hq0onn+G3myK4U01Oq3QWU/5UvGj0vM0JJn249V64WJSBfTEoJkxM8VaudQ/eTX9LMXQgnmX4iB73CB7b7X+dMSThCK1lUNqrdO8DgEyoii/RvokXSHaRULGsr6k6TZz3GwvXR0iFxprvceDG9b/F71DrCu61S7BI/xaggi0VofefPrwetqWoOAXAUCZV9LLmUHB+hv8Y6hE9iDv6tEmYH61hB5suT59oC57PDgHSmyLVXnI9yNfboRjgzowBejX7UmQxtKwodHNoObWpDQA8ox3SN9BbiPWk=) 2025-08-29 16:55:48.981049 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA6YeSOrhInyy2983Q+DwixXqaPEhhsTWsqB+700WXz6) 2025-08-29 16:55:48.981060 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBLOa1l7t/DozFS5HpYS7AblfuQdnXcsAvFIsQ+UE3yCDEz25ejZN4FYD1bnCp/7y5HuiP+IaJ7PXRMubIPkVI0=) 2025-08-29 16:55:48.981071 | orchestrator | 2025-08-29 16:55:48.981081 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 16:55:48.981092 | orchestrator | Friday 29 August 2025 16:55:46 +0000 (0:00:01.141) 0:00:22.451 ********* 2025-08-29 16:55:48.981109 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1edKhdnjrTFB5UALO2SbQwpgls3X7qWCiC4X+Jw2WA9/os+xEUjT86jiL8KHwqMYl9t/wbi175hiFBqMlpPOGVzK345UhVoG3Pf199m/Rszao0jM1VQY2zQ9KAcxNouX1LjrlaQ4VSw+14ERyU6/d1T7i8VyvyJ3327/oiqkQzyWAV2upQrSgm0xD1Qsi+y431YUBNECzp+lA2wRPZgNV9D1UKqzduI7g0Zdk8RaaKpoSDpZqyHIBgdD90jCev8ZN6wbadWIDY0c75SmWM8CaN6tyoZTL65qyujyOETDfY5/UW2t1orU1mP3yIIjaOSQGtMZdbnCyTqYIx/iRnRkvLxZsGUR8A6e/UZUregnWuJniZvymKVpmUFBsYF4SCQKjk991vgxXEDrE6XCaW/k6z8AEQzslL9phJmumINPf3e4xpn7CkA6MURMUb/z/6IzcGH1wcOwm//R3Gz9vyBgEO/1OVOD8fasCCOkPy2t5wRnH1Go159DujY12zbxMOQM=) 2025-08-29 16:55:48.981219 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOu6HRsgjMfAdN1CZFplOp3I+j7i751pf9O/Gvog28PTmCUl6oP8vDSbabbR1KE2fbbAl/KVMNMSNzJ0/8J8xec=) 2025-08-29 16:55:48.981232 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB/HyJPXZUnGa6b6JmOYZnHySf1zMLjewsBtdginx0Dr) 2025-08-29 16:55:48.981243 | orchestrator | 2025-08-29 16:55:48.981254 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 16:55:48.981265 | orchestrator | Friday 29 August 2025 16:55:47 +0000 (0:00:01.056) 0:00:23.508 
********* 2025-08-29 16:55:48.981276 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBdkHucxFGxEuKxzwDooyhCrlO3+TB4FsM4gSW1iS4j76tqRMwAG0mIQ7mvCwb0b6hAvfBw6r8NT4LoB78L0yKs=) 2025-08-29 16:55:48.981309 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCb+8mxSHAiSXinLHFBS5NSMLaJdnFOft7VEnlSPOffNXpGPcwLl5nXRY5Tg4MgYzbsJLXMgOwHJEA66ymkcsx82EQptKiwWxhZZNo/G3JOvNRZOpe8z0D1SWXkGlZudJnyb+ZeBDl+3RObXU08WgP6VCkr6w3FjRKJbL9fv93suXNHPDcHZWc/uUr8RdK+dap+BJ/u8MD58Z3vzzHAGGbaHiiHWccIT3Ikx+jbucYnH+f2yAZuEZPR047zasIbx3uxIQ4Xvb6nBMdLsa8vL+SibF/xu6YyrQiUzF8Ybkj4pt+206JAb5vFVpkOsw2WJEZA3GItTU3sGr2UeRPIG3alrXG8+5moucSleRv+cc3RpXyKO9O5Ca4IVgYg8BylgPxMjpWgckhDfkReHbpD8l4OY9rbTUDYOjVdNdbyaZL6LhD7uhXCDwGgu6NC8CJTfJUiH2Qo7BgPjmj6fQ4kJ6uclwfGr1RMxY2uNU6P/QZkGFz2SNv0grt/cY2azLTiF/0=) 2025-08-29 16:55:53.373845 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKfK9qDn9seL6Ej898MleuxRi8HzXHdaI4cYtxL3FAku) 2025-08-29 16:55:53.373941 | orchestrator | 2025-08-29 16:55:53.373957 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 16:55:53.373970 | orchestrator | Friday 29 August 2025 16:55:48 +0000 (0:00:01.154) 0:00:24.662 ********* 2025-08-29 16:55:53.373982 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKzgcfrLDgsqfHFoFYncdnX5CqrHUe5z/zn5w5U59OF6) 2025-08-29 16:55:53.373995 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCb+Z7SzLT6DOpOZrEksiVPu4he62zyZ1DblmyswwXoq0dDws8i8g/qPoIm2DkCrSrRzKL47VUrxIjLGcWF7t7oFugegBNIitHkKlujzULAeL9oplgPFVZIhGx+y2QT4QrCDNpRnWcAmJZq1ZZD7LA71vJ5+aLBxykJZJXniD0eTneXqVApco8PaDkg8U0Vw1i8NadKic+Py/3yVHN1RCj/8IhpO7VEYBrCMWwnIqiBZrbLgszxZblEKZhcmV2avsKVZQVd8qY5cP/qOY0IC3db6oYzTwjRJft0FD7stP98sL2TL089N+MqRxTyq6JNrINFAN8puW/Ptajc4q7o0HRhpkWi69wWXP8lb+S4Z6lmZX3ADw4RERxb8IK8FbRgit5UIJlH2yemqDZwDyin7iPLTzKfjuys37/BFb7NkZV/BfclswXhds2kM3O1M5G9HNQ5oedm2KNw5dy6X6Bk9Tn94YbuD+i5rdaIhlgQPAXxhVqB1QgGA2XciaHhuJJmCVM=) 2025-08-29 16:55:53.374011 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAj7wNI9q5UuQc21/tiVtG7UFH/HU4vDlLgp129AVh0LSG8RclfJVHen0dTnXt8JIYttDTyxnCO+U9JVl96L9Ys=) 2025-08-29 16:55:53.374070 | orchestrator | 2025-08-29 16:55:53.374083 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 16:55:53.374094 | orchestrator | Friday 29 August 2025 16:55:50 +0000 (0:00:01.083) 0:00:25.746 ********* 2025-08-29 16:55:53.374105 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCV2y2GIKnoNOcmgyi0K08KvTfDSnZ1sRpyCnCzwFoX4EJ6/+RWRQKYjA9NclljzDcIKXJcpOYfsXWPMeUbkUsIlX2lGfYbA7kd/w7y0DHyHZPiB1mDpnTkaSdav8E8scW+liIrVkv9G07dFX4SfWgsi7BCsHzhjN7pKuDBkzNlLSeFTq5ggtI3sGaXvo3FfZRimg0jzJpXqakIq6lhAUCHg3RlqWfjPMlWLbsk0qykbVED8/vRO9I17P7YeeyKEvJ/yIJnFCp2d9YHjeIs3Z4MgsSxUmkXymINL+G3hW/B0LleMk8PJe99ikVVEFaOFkryJlBj86mQgHXpJbkK4lX3bDKPpxTJBnc2bl3jj50nGSN6/efWpKGN9F2RBN4qvLOOonEXjQ2xBXkGZxvfJ7XgtY3G+QC3Ev05/qzB9y2AZ+AyVsm+qyxy4wolNKwtgs6lMHEzUTWWyxEUHpQ1jA3MrxhTRndUrYIpbX1EkRf7MfAZ6sMhoP0Qx4kMCt2JebE=) 2025-08-29 16:55:53.374142 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK72IaWhmAQwH5kJhdmq4VmnNOjNnZYCU80/WoR02kSa3m5rZzqPW/i/2apWnqqhuLppQL1BT7PyAQdyQ+8FrhY=) 
2025-08-29 16:55:53.374154 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHbp9YWKNGAxRRkVDbsAa+g/RESgHmFl34wPaXw9qXmC)
2025-08-29 16:55:53.374165 | orchestrator |
2025-08-29 16:55:53.374176 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 16:55:53.374187 | orchestrator | Friday 29 August 2025 16:55:51 +0000 (0:00:01.090) 0:00:26.836 *********
2025-08-29 16:55:53.374198 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6eyYP/bUW8+yt6Nz9tPv7x4QddYpLnHtXcP96R6zE0DDNVune29yKjEeahFZe2lKVAaKT6zgOhtB48gOJMyA356O0wh1+wJ0tw6dR8Lhzqzg8Idi7Q2kvXbajzkSm/56BmgjPtcBMmW5/nWDjVkRF8SgI2LUQAzTL9V/i5sg7qBwDfOF4+VtWXtGkfhro/yebvTAsck4KAcs+wNETtC/XXzSAnsJD4EIBQXAr5kghdtVObJcGwDWeTb/Of9QzNY7BSYvk6ypAPBXL7cGVp8aTr4YWyoABYqaMdj5QyFUnkieoPCd5z8qVz0Ig9Ng1bkzzAKlrUqnL8YUY12oRgfYV0kaKU8SlDroDnzPUmBxt30C0ua0UlM2Bik109IjuWfjKThHupGFvZHcu+X29AgWycIiH7rPyTUHsPQSAZPgHIvKQCjPs77KDa+cqmNE3yfcloHZ5LGtF1AOvkD7rBoZyLDU7DsEJ6YuzH/juTQjiL+Zq+pdA++jjZWqW/xOmPHc=)
2025-08-29 16:55:53.374210 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAEJTDUZGgKcyh6A3Bz1iI7pkFh5BqUHdd300xfu/gJwby/lEpi8NahgyH8/UEx8k9RL/tDROECA6fF+X09bq6Q=)
2025-08-29 16:55:53.374221 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKvUNGUsL00KMRSnSsaLH3KtT3MD6v+YcQlwYuLOqWIk)
2025-08-29 16:55:53.374232 | orchestrator |
2025-08-29 16:55:53.374244 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-08-29 16:55:53.374255 | orchestrator | Friday 29 August 2025 16:55:52 +0000 (0:00:01.148) 0:00:27.985 *********
2025-08-29 16:55:53.374266 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-08-29 16:55:53.374278 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-08-29 16:55:53.374288 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-08-29 16:55:53.374299 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-08-29 16:55:53.374310 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-08-29 16:55:53.374338 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-08-29 16:55:53.374350 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-08-29 16:55:53.374376 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:55:53.374389 | orchestrator |
2025-08-29 16:55:53.374402 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-08-29 16:55:53.374419 | orchestrator | Friday 29 August 2025 16:55:52 +0000 (0:00:00.174) 0:00:28.160 *********
2025-08-29 16:55:53.374432 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:55:53.374444 | orchestrator |
2025-08-29 16:55:53.374456 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-08-29 16:55:53.374469 | orchestrator | Friday 29 August 2025 16:55:52 +0000 (0:00:00.067) 0:00:28.227 *********
2025-08-29 16:55:53.374481 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:55:53.374493 | orchestrator |
2025-08-29 16:55:53.374505 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-08-29 16:55:53.374517 | orchestrator | Friday 29 August 2025 16:55:52 +0000 (0:00:00.071) 0:00:28.299 *********
2025-08-29 16:55:53.374530 | orchestrator | changed: [testbed-manager]
2025-08-29 16:55:53.374542 | orchestrator |
2025-08-29 16:55:53.374553 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 16:55:53.374566 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 16:55:53.374579 | orchestrator |
2025-08-29 16:55:53.374591 | orchestrator |
2025-08-29 16:55:53.374604 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 16:55:53.374621 | orchestrator | Friday 29 August 2025 16:55:53 +0000 (0:00:00.498) 0:00:28.797 *********
2025-08-29 16:55:53.374633 | orchestrator | ===============================================================================
2025-08-29 16:55:53.374645 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.19s
2025-08-29 16:55:53.374657 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.52s
2025-08-29 16:55:53.374670 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.26s
2025-08-29 16:55:53.374682 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s
2025-08-29 16:55:53.374695 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s
2025-08-29 16:55:53.374707 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2025-08-29 16:55:53.374720 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2025-08-29 16:55:53.374731 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2025-08-29 16:55:53.374742 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2025-08-29 16:55:53.374752 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2025-08-29 16:55:53.374787 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2025-08-29 16:55:53.374798 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-08-29 16:55:53.374809 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2025-08-29 16:55:53.374820 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2025-08-29 16:55:53.374831 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-08-29 16:55:53.374841 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2025-08-29 16:55:53.374852 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.50s
2025-08-29 16:55:53.374863 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s
2025-08-29 16:55:53.374874 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s
2025-08-29 16:55:53.374886 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s
2025-08-29 16:55:53.722574 | orchestrator | + osism apply squid
2025-08-29 16:56:05.703533 | orchestrator | 2025-08-29 16:56:05 | INFO  | Task 27008651-51f6-4449-b514-4e9baa8cc105 (squid) was prepared for execution.
2025-08-29 16:56:05.703621 | orchestrator | 2025-08-29 16:56:05 | INFO  | It takes a moment until task 27008651-51f6-4449-b514-4e9baa8cc105 (squid) has been started and output is visible here.
2025-08-29 16:58:00.952259 | orchestrator |
2025-08-29 16:58:00.952400 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-08-29 16:58:00.952419 | orchestrator |
2025-08-29 16:58:00.952432 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-08-29 16:58:00.952443 | orchestrator | Friday 29 August 2025 16:56:09 +0000 (0:00:00.182) 0:00:00.182 *********
2025-08-29 16:58:00.952455 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-08-29 16:58:00.952466 | orchestrator |
2025-08-29 16:58:00.952478 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-08-29 16:58:00.952489 | orchestrator | Friday 29 August 2025 16:56:09 +0000 (0:00:00.102) 0:00:00.285 *********
2025-08-29 16:58:00.952500 | orchestrator | ok: [testbed-manager]
2025-08-29 16:58:00.952512 | orchestrator |
2025-08-29 16:58:00.952523 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-08-29 16:58:00.952535 | orchestrator | Friday 29 August 2025 16:56:11 +0000 (0:00:01.474) 0:00:01.760 *********
2025-08-29 16:58:00.952570 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-08-29 16:58:00.952582 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-08-29 16:58:00.952593 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-08-29 16:58:00.952604 | orchestrator |
2025-08-29 16:58:00.952614 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-08-29 16:58:00.952625 | orchestrator | Friday 29 August 2025 16:56:12 +0000 (0:00:01.242) 0:00:03.002 *********
2025-08-29 16:58:00.952636 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-08-29 16:58:00.952647 | orchestrator |
2025-08-29 16:58:00.952658 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-08-29 16:58:00.952669 | orchestrator | Friday 29 August 2025 16:56:13 +0000 (0:00:01.101) 0:00:04.103 *********
2025-08-29 16:58:00.952680 | orchestrator | ok: [testbed-manager]
2025-08-29 16:58:00.952690 | orchestrator |
2025-08-29 16:58:00.952701 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-08-29 16:58:00.952713 | orchestrator | Friday 29 August 2025 16:56:14 +0000 (0:00:00.383) 0:00:04.487 *********
2025-08-29 16:58:00.952725 | orchestrator | changed: [testbed-manager]
2025-08-29 16:58:00.952737 | orchestrator |
2025-08-29 16:58:00.952750 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-08-29 16:58:00.952762 | orchestrator | Friday 29 August 2025 16:56:14 +0000 (0:00:00.947) 0:00:05.434 *********
2025-08-29 16:58:00.952774 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-08-29 16:58:00.952787 | orchestrator | ok: [testbed-manager]
2025-08-29 16:58:00.952800 | orchestrator |
2025-08-29 16:58:00.952812 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-08-29 16:58:00.952823 | orchestrator | Friday 29 August 2025 16:56:47 +0000 (0:00:32.409) 0:00:37.844 *********
2025-08-29 16:58:00.952833 | orchestrator | changed: [testbed-manager]
2025-08-29 16:58:00.952844 | orchestrator |
2025-08-29 16:58:00.952855 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-08-29 16:58:00.952866 | orchestrator | Friday 29 August 2025 16:56:59 +0000 (0:00:12.437) 0:00:50.282 *********
2025-08-29 16:58:00.952877 | orchestrator | Pausing for 60 seconds
2025-08-29 16:58:00.952888 | orchestrator | changed: [testbed-manager]
2025-08-29 16:58:00.952900 | orchestrator |
2025-08-29 16:58:00.952911 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-08-29 16:58:00.952944 | orchestrator | Friday 29 August 2025 16:57:59 +0000 (0:01:00.080) 0:01:50.363 *********
2025-08-29 16:58:00.952956 | orchestrator | ok: [testbed-manager]
2025-08-29 16:58:00.952966 | orchestrator |
2025-08-29 16:58:00.952977 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-08-29 16:58:00.952988 | orchestrator | Friday 29 August 2025 16:57:59 +0000 (0:00:00.065) 0:01:50.428 *********
2025-08-29 16:58:00.952999 | orchestrator | changed: [testbed-manager]
2025-08-29 16:58:00.953010 | orchestrator |
2025-08-29 16:58:00.953021 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 16:58:00.953031 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 16:58:00.953042 | orchestrator |
2025-08-29 16:58:00.953053 | orchestrator |
2025-08-29 16:58:00.953064 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 16:58:00.953075 | orchestrator | Friday 29 August 2025 16:58:00 +0000 (0:00:00.695) 0:01:51.124 *********
2025-08-29 16:58:00.953086 | orchestrator | ===============================================================================
2025-08-29 16:58:00.953115 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2025-08-29 16:58:00.953127 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.41s
2025-08-29 16:58:00.953138 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.44s
2025-08-29 16:58:00.953149 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.47s
2025-08-29 16:58:00.953169 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.24s
2025-08-29 16:58:00.953180 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.10s
2025-08-29 16:58:00.953191 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s
2025-08-29 16:58:00.953202 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.70s
2025-08-29 16:58:00.953213 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s
2025-08-29 16:58:00.953224 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s
2025-08-29 16:58:00.953234 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2025-08-29 16:58:01.237237 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-08-29 16:58:01.237597 | orchestrator | ++ semver latest 9.0.0
2025-08-29 16:58:01.295768 | orchestrator | + [[ -1 -lt 0 ]]
2025-08-29 16:58:01.295831 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-08-29 16:58:01.296755 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-08-29 16:58:13.430362 | orchestrator | 2025-08-29 16:58:13 | INFO  | Task fd5b1615-dd06-4844-a92c-c2cd47a08c69 (operator) was prepared for execution.
2025-08-29 16:58:13.430513 | orchestrator | 2025-08-29 16:58:13 | INFO  | It takes a moment until task fd5b1615-dd06-4844-a92c-c2cd47a08c69 (operator) has been started and output is visible here.
2025-08-29 16:58:29.784580 | orchestrator |
2025-08-29 16:58:29.784723 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-08-29 16:58:29.784754 | orchestrator |
2025-08-29 16:58:29.784766 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 16:58:29.784778 | orchestrator | Friday 29 August 2025 16:58:17 +0000 (0:00:00.152) 0:00:00.152 *********
2025-08-29 16:58:29.784789 | orchestrator | ok: [testbed-node-2]
2025-08-29 16:58:29.784813 | orchestrator | ok: [testbed-node-1]
2025-08-29 16:58:29.784824 | orchestrator | ok: [testbed-node-5]
2025-08-29 16:58:29.784835 | orchestrator | ok: [testbed-node-3]
2025-08-29 16:58:29.784846 | orchestrator | ok: [testbed-node-0]
2025-08-29 16:58:29.784856 | orchestrator | ok: [testbed-node-4]
2025-08-29 16:58:29.784867 | orchestrator |
2025-08-29 16:58:29.784878 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-08-29 16:58:29.784889 | orchestrator | Friday 29 August 2025 16:58:21 +0000 (0:00:03.734) 0:00:03.886 *********
2025-08-29 16:58:29.784900 | orchestrator | ok: [testbed-node-1]
2025-08-29 16:58:29.784911 | orchestrator | ok: [testbed-node-0]
2025-08-29 16:58:29.784922 | orchestrator | ok: [testbed-node-2]
2025-08-29 16:58:29.784989 | orchestrator | ok: [testbed-node-4]
2025-08-29 16:58:29.785001 | orchestrator | ok: [testbed-node-3]
2025-08-29 16:58:29.785012 | orchestrator | ok: [testbed-node-5]
2025-08-29 16:58:29.785023 | orchestrator |
2025-08-29 16:58:29.785034 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-08-29 16:58:29.785048 | orchestrator |
2025-08-29 16:58:29.785059 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-08-29 16:58:29.785072 | orchestrator | Friday 29 August 2025 16:58:21 +0000 (0:00:00.758) 0:00:04.645 *********
2025-08-29 16:58:29.785084 | orchestrator | ok: [testbed-node-0]
2025-08-29 16:58:29.785096 | orchestrator | ok: [testbed-node-1]
2025-08-29 16:58:29.785108 | orchestrator | ok: [testbed-node-2]
2025-08-29 16:58:29.785120 | orchestrator | ok: [testbed-node-3]
2025-08-29 16:58:29.785134 | orchestrator | ok: [testbed-node-4]
2025-08-29 16:58:29.785153 | orchestrator | ok: [testbed-node-5]
2025-08-29 16:58:29.785172 | orchestrator |
2025-08-29 16:58:29.785191 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-08-29 16:58:29.785210 | orchestrator | Friday 29 August 2025 16:58:22 +0000 (0:00:00.172) 0:00:04.818 *********
2025-08-29 16:58:29.785228 | orchestrator | ok: [testbed-node-0]
2025-08-29 16:58:29.785244 | orchestrator | ok: [testbed-node-1]
2025-08-29 16:58:29.785261 | orchestrator | ok: [testbed-node-2]
2025-08-29 16:58:29.785279 | orchestrator | ok: [testbed-node-3]
2025-08-29 16:58:29.785326 | orchestrator | ok: [testbed-node-4]
2025-08-29 16:58:29.785347 | orchestrator | ok: [testbed-node-5]
2025-08-29 16:58:29.785367 | orchestrator |
2025-08-29 16:58:29.785386 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-08-29 16:58:29.785407 | orchestrator | Friday 29 August 2025 16:58:22 +0000 (0:00:00.173) 0:00:04.992 *********
2025-08-29 16:58:29.785420 | orchestrator | changed: [testbed-node-0]
2025-08-29 16:58:29.785431 | orchestrator | changed: [testbed-node-5]
2025-08-29 16:58:29.785442 | orchestrator | changed: [testbed-node-3]
2025-08-29 16:58:29.785452 | orchestrator | changed: [testbed-node-2]
2025-08-29 16:58:29.785463 | orchestrator | changed: [testbed-node-1]
2025-08-29 16:58:29.785473 | orchestrator | changed: [testbed-node-4]
2025-08-29 16:58:29.785484 | orchestrator |
2025-08-29 16:58:29.785495 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-08-29 16:58:29.785506 | orchestrator | Friday 29 August 2025 16:58:22 +0000 (0:00:00.589) 0:00:05.582 *********
2025-08-29 16:58:29.785516 | orchestrator | changed: [testbed-node-5]
2025-08-29 16:58:29.785527 | orchestrator | changed: [testbed-node-2]
2025-08-29 16:58:29.785538 | orchestrator | changed: [testbed-node-3]
2025-08-29 16:58:29.785548 | orchestrator | changed: [testbed-node-1]
2025-08-29 16:58:29.785559 | orchestrator | changed: [testbed-node-0]
2025-08-29 16:58:29.785569 | orchestrator | changed: [testbed-node-4]
2025-08-29 16:58:29.785580 | orchestrator |
2025-08-29 16:58:29.785590 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-08-29 16:58:29.785601 | orchestrator | Friday 29 August 2025 16:58:23 +0000 (0:00:00.842) 0:00:06.425 *********
2025-08-29 16:58:29.785611 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-08-29 16:58:29.785623 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-08-29 16:58:29.785633 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-08-29 16:58:29.785644 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-08-29 16:58:29.785654 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-08-29 16:58:29.785665 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-08-29 16:58:29.785676 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-08-29 16:58:29.785686 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-08-29 16:58:29.785697 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-08-29 16:58:29.785707 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-08-29 16:58:29.785718 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-08-29 16:58:29.785728 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-08-29 16:58:29.785739 | orchestrator |
2025-08-29 16:58:29.785750 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-08-29 16:58:29.785760 | orchestrator | Friday 29 August 2025 16:58:24 +0000 (0:00:01.245) 0:00:07.670 *********
2025-08-29 16:58:29.785771 | orchestrator | changed: [testbed-node-4]
2025-08-29 16:58:29.785781 | orchestrator | changed: [testbed-node-2]
2025-08-29 16:58:29.785792 | orchestrator | changed: [testbed-node-5]
2025-08-29 16:58:29.785802 | orchestrator | changed: [testbed-node-3]
2025-08-29 16:58:29.785813 | orchestrator | changed: [testbed-node-0]
2025-08-29 16:58:29.785823 | orchestrator | changed: [testbed-node-1]
2025-08-29 16:58:29.785834 | orchestrator |
2025-08-29 16:58:29.785844 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-08-29 16:58:29.785856 | orchestrator | Friday 29 August 2025 16:58:26 +0000 (0:00:01.267) 0:00:08.937 *********
2025-08-29 16:58:29.785867 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-08-29 16:58:29.785877 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-08-29 16:58:29.785888 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-08-29 16:58:29.785899 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-08-29 16:58:29.785928 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-08-29 16:58:29.785980 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-08-29 16:58:29.785991 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-08-29 16:58:29.786002 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-08-29 16:58:29.786012 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-08-29 16:58:29.786064 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-08-29 16:58:29.786075 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-08-29 16:58:29.786086 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-08-29 16:58:29.786096 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-08-29 16:58:29.786107 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-08-29 16:58:29.786118 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-08-29 16:58:29.786129 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-08-29 16:58:29.786139 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-08-29 16:58:29.786150 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-08-29 16:58:29.786161 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-08-29 16:58:29.786172 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-08-29 16:58:29.786183 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-08-29 16:58:29.786193 | orchestrator |
2025-08-29 16:58:29.786204 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-08-29 16:58:29.786216 | orchestrator | Friday 29 August 2025 16:58:27 +0000 (0:00:01.352) 0:00:10.289 *********
2025-08-29 16:58:29.786227 | orchestrator | skipping: [testbed-node-0]
2025-08-29 16:58:29.786238 | orchestrator | skipping: [testbed-node-1]
2025-08-29 16:58:29.786248 | orchestrator | skipping: [testbed-node-2]
2025-08-29 16:58:29.786259 | orchestrator | skipping: [testbed-node-3]
2025-08-29 16:58:29.786270 | orchestrator | skipping: [testbed-node-4]
2025-08-29 16:58:29.786280 | orchestrator | skipping: [testbed-node-5]
2025-08-29 16:58:29.786291 | orchestrator |
2025-08-29 16:58:29.786301 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-08-29 16:58:29.786312 | orchestrator | Friday 29 August 2025 16:58:27 +0000 (0:00:00.143) 0:00:10.433 *********
2025-08-29 16:58:29.786323 | orchestrator | changed: [testbed-node-2]
2025-08-29 16:58:29.786344 | orchestrator | changed: [testbed-node-4]
2025-08-29 16:58:29.786355 | orchestrator | changed: [testbed-node-1]
2025-08-29 16:58:29.786365 | orchestrator | changed: [testbed-node-0]
2025-08-29 16:58:29.786376 | orchestrator | changed: [testbed-node-5]
2025-08-29 16:58:29.786389 | orchestrator | changed: [testbed-node-3]
2025-08-29 16:58:29.786408 | orchestrator |
2025-08-29 16:58:29.786426 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-08-29 16:58:29.786443 | orchestrator | Friday 29 August 2025 16:58:28 +0000 (0:00:00.685) 0:00:11.118 *********
2025-08-29 16:58:29.786461 | orchestrator | skipping: [testbed-node-0]
2025-08-29 16:58:29.786477 | orchestrator | skipping: [testbed-node-1]
2025-08-29 16:58:29.786495 | orchestrator | skipping: [testbed-node-2]
2025-08-29 16:58:29.786514 | orchestrator | skipping: [testbed-node-3]
2025-08-29 16:58:29.786533 | orchestrator | skipping: [testbed-node-4]
2025-08-29 16:58:29.786552 | orchestrator | skipping: [testbed-node-5]
2025-08-29 16:58:29.786565 | orchestrator |
2025-08-29 16:58:29.786576 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-08-29 16:58:29.786587 | orchestrator | Friday 29 August 2025 16:58:28 +0000 (0:00:00.189) 0:00:11.308 *********
2025-08-29 16:58:29.786598 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-08-29 16:58:29.786609 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-08-29 16:58:29.786619 | orchestrator | changed: [testbed-node-4]
2025-08-29 16:58:29.786630 | orchestrator | changed: [testbed-node-2]
2025-08-29 16:58:29.786653 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 16:58:29.786664 | orchestrator | changed: [testbed-node-0]
2025-08-29 16:58:29.786675 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-08-29 16:58:29.786686 | orchestrator | changed: [testbed-node-1]
2025-08-29 16:58:29.786697 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-08-29 16:58:29.786707 | orchestrator | changed: [testbed-node-5]
2025-08-29 16:58:29.786718 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-08-29 16:58:29.786729 | orchestrator | changed: [testbed-node-3]
2025-08-29 16:58:29.786739 | orchestrator |
2025-08-29 16:58:29.786750 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-08-29 16:58:29.786761 | orchestrator | Friday 29 August 2025 16:58:29 +0000 (0:00:00.728) 0:00:12.037 *********
2025-08-29 16:58:29.786771 | orchestrator | skipping: [testbed-node-0]
2025-08-29 16:58:29.786782 | orchestrator | skipping: [testbed-node-1]
2025-08-29 16:58:29.786792 | orchestrator | skipping: [testbed-node-2]
2025-08-29 16:58:29.786803 | orchestrator | skipping: [testbed-node-3]
2025-08-29 16:58:29.786813 | orchestrator | skipping: [testbed-node-4]
2025-08-29
16:58:29.786824 | orchestrator | skipping: [testbed-node-5] 2025-08-29 16:58:29.786835 | orchestrator | 2025-08-29 16:58:29.786845 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-08-29 16:58:29.786856 | orchestrator | Friday 29 August 2025 16:58:29 +0000 (0:00:00.160) 0:00:12.197 ********* 2025-08-29 16:58:29.786866 | orchestrator | skipping: [testbed-node-0] 2025-08-29 16:58:29.786877 | orchestrator | skipping: [testbed-node-1] 2025-08-29 16:58:29.786888 | orchestrator | skipping: [testbed-node-2] 2025-08-29 16:58:29.786898 | orchestrator | skipping: [testbed-node-3] 2025-08-29 16:58:29.786909 | orchestrator | skipping: [testbed-node-4] 2025-08-29 16:58:29.786919 | orchestrator | skipping: [testbed-node-5] 2025-08-29 16:58:29.786953 | orchestrator | 2025-08-29 16:58:29.786966 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-08-29 16:58:29.786978 | orchestrator | Friday 29 August 2025 16:58:29 +0000 (0:00:00.160) 0:00:12.358 ********* 2025-08-29 16:58:29.786988 | orchestrator | skipping: [testbed-node-0] 2025-08-29 16:58:29.786999 | orchestrator | skipping: [testbed-node-1] 2025-08-29 16:58:29.787010 | orchestrator | skipping: [testbed-node-2] 2025-08-29 16:58:29.787021 | orchestrator | skipping: [testbed-node-3] 2025-08-29 16:58:29.787042 | orchestrator | skipping: [testbed-node-4] 2025-08-29 16:58:30.963274 | orchestrator | skipping: [testbed-node-5] 2025-08-29 16:58:30.963369 | orchestrator | 2025-08-29 16:58:30.963385 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-08-29 16:58:30.963398 | orchestrator | Friday 29 August 2025 16:58:29 +0000 (0:00:00.173) 0:00:12.531 ********* 2025-08-29 16:58:30.963410 | orchestrator | changed: [testbed-node-0] 2025-08-29 16:58:30.963420 | orchestrator | changed: [testbed-node-2] 2025-08-29 16:58:30.963431 | orchestrator | changed: [testbed-node-1] 2025-08-29 
16:58:30.963442 | orchestrator | changed: [testbed-node-3] 2025-08-29 16:58:30.963453 | orchestrator | changed: [testbed-node-4] 2025-08-29 16:58:30.963463 | orchestrator | changed: [testbed-node-5] 2025-08-29 16:58:30.963474 | orchestrator | 2025-08-29 16:58:30.963485 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-08-29 16:58:30.963495 | orchestrator | Friday 29 August 2025 16:58:30 +0000 (0:00:00.672) 0:00:13.204 ********* 2025-08-29 16:58:30.963506 | orchestrator | skipping: [testbed-node-0] 2025-08-29 16:58:30.963517 | orchestrator | skipping: [testbed-node-1] 2025-08-29 16:58:30.963542 | orchestrator | skipping: [testbed-node-2] 2025-08-29 16:58:30.963554 | orchestrator | skipping: [testbed-node-3] 2025-08-29 16:58:30.963564 | orchestrator | skipping: [testbed-node-4] 2025-08-29 16:58:30.963575 | orchestrator | skipping: [testbed-node-5] 2025-08-29 16:58:30.963585 | orchestrator | 2025-08-29 16:58:30.963596 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 16:58:30.963663 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 16:58:30.963699 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 16:58:30.963711 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 16:58:30.963722 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 16:58:30.963732 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 16:58:30.963743 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 16:58:30.963754 | orchestrator | 2025-08-29 16:58:30.963764 | orchestrator | 2025-08-29 16:58:30.963775 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 16:58:30.963786 | orchestrator | Friday 29 August 2025 16:58:30 +0000 (0:00:00.258) 0:00:13.462 ********* 2025-08-29 16:58:30.963797 | orchestrator | =============================================================================== 2025-08-29 16:58:30.963808 | orchestrator | Gathering Facts --------------------------------------------------------- 3.73s 2025-08-29 16:58:30.963818 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.35s 2025-08-29 16:58:30.963830 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s 2025-08-29 16:58:30.963843 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.25s 2025-08-29 16:58:30.963855 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.84s 2025-08-29 16:58:30.963866 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s 2025-08-29 16:58:30.963878 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s 2025-08-29 16:58:30.963890 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.69s 2025-08-29 16:58:30.963902 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s 2025-08-29 16:58:30.963914 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.59s 2025-08-29 16:58:30.963926 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.26s 2025-08-29 16:58:30.963966 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s 2025-08-29 16:58:30.963979 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s 2025-08-29 16:58:30.963991 | orchestrator 
| osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2025-08-29 16:58:30.964003 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s 2025-08-29 16:58:30.964015 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s 2025-08-29 16:58:30.964028 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2025-08-29 16:58:30.964040 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s 2025-08-29 16:58:31.276177 | orchestrator | + osism apply --environment custom facts 2025-08-29 16:58:33.224416 | orchestrator | 2025-08-29 16:58:33 | INFO  | Trying to run play facts in environment custom 2025-08-29 16:58:43.482268 | orchestrator | 2025-08-29 16:58:43 | INFO  | Task 55e9a113-3460-4470-ba2e-3441e269c49a (facts) was prepared for execution. 2025-08-29 16:58:43.482393 | orchestrator | 2025-08-29 16:58:43 | INFO  | It takes a moment until task 55e9a113-3460-4470-ba2e-3441e269c49a (facts) has been started and output is visible here. 
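The Ansible warning captured earlier in this log ("create the remote_tmp dir with the correct permissions manually") prescribes a concrete fix. A minimal sketch, assuming the default `remote_tmp` location of `~/.ansible/tmp` (check `ansible.cfg` on the target if it has been overridden):

```shell
# Pre-create Ansible's remote_tmp with owner-only permissions, run as the
# connecting user on each target node. This stops Ansible from falling back
# to a world-readable system tmp directory and silences the warning.
# The path is the Ansible default and is an assumption here.
mkdir -p "$HOME/.ansible/tmp"
chmod 700 "$HOME/.ansible/tmp"
```

Running this once per node (or baking it into the image) keeps the warning out of subsequent play output.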
2025-08-29 16:59:30.263059 | orchestrator |
2025-08-29 16:59:30.263169 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-08-29 16:59:30.263213 | orchestrator |
2025-08-29 16:59:30.263227 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-08-29 16:59:30.263238 | orchestrator | Friday 29 August 2025 16:58:47 +0000 (0:00:00.089) 0:00:00.089 *********
2025-08-29 16:59:30.263250 | orchestrator | ok: [testbed-manager]
2025-08-29 16:59:30.263262 | orchestrator | changed: [testbed-node-4]
2025-08-29 16:59:30.263289 | orchestrator | changed: [testbed-node-2]
2025-08-29 16:59:30.263300 | orchestrator | changed: [testbed-node-0]
2025-08-29 16:59:30.263311 | orchestrator | changed: [testbed-node-1]
2025-08-29 16:59:30.263322 | orchestrator | changed: [testbed-node-3]
2025-08-29 16:59:30.263333 | orchestrator | changed: [testbed-node-5]
2025-08-29 16:59:30.263344 | orchestrator |
2025-08-29 16:59:30.263354 | orchestrator | TASK [Copy fact file] **********************************************************
2025-08-29 16:59:30.263365 | orchestrator | Friday 29 August 2025 16:58:48 +0000 (0:00:01.508) 0:00:01.597 *********
2025-08-29 16:59:30.263376 | orchestrator | ok: [testbed-manager]
2025-08-29 16:59:30.263387 | orchestrator | changed: [testbed-node-2]
2025-08-29 16:59:30.263398 | orchestrator | changed: [testbed-node-1]
2025-08-29 16:59:30.263409 | orchestrator | changed: [testbed-node-4]
2025-08-29 16:59:30.263420 | orchestrator | changed: [testbed-node-3]
2025-08-29 16:59:30.263430 | orchestrator | changed: [testbed-node-5]
2025-08-29 16:59:30.263441 | orchestrator | changed: [testbed-node-0]
2025-08-29 16:59:30.263451 | orchestrator |
2025-08-29 16:59:30.263462 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-08-29 16:59:30.263473 | orchestrator |
2025-08-29 16:59:30.263483 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-08-29 16:59:30.263494 | orchestrator | Friday 29 August 2025 16:58:50 +0000 (0:00:01.173) 0:00:02.771 *********
2025-08-29 16:59:30.263505 | orchestrator | ok: [testbed-node-3]
2025-08-29 16:59:30.263517 | orchestrator | ok: [testbed-node-4]
2025-08-29 16:59:30.263530 | orchestrator | ok: [testbed-node-5]
2025-08-29 16:59:30.263541 | orchestrator |
2025-08-29 16:59:30.263553 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-08-29 16:59:30.263566 | orchestrator | Friday 29 August 2025 16:58:50 +0000 (0:00:00.139) 0:00:02.910 *********
2025-08-29 16:59:30.263578 | orchestrator | ok: [testbed-node-3]
2025-08-29 16:59:30.263590 | orchestrator | ok: [testbed-node-4]
2025-08-29 16:59:30.263602 | orchestrator | ok: [testbed-node-5]
2025-08-29 16:59:30.263614 | orchestrator |
2025-08-29 16:59:30.263644 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-08-29 16:59:30.263657 | orchestrator | Friday 29 August 2025 16:58:50 +0000 (0:00:00.222) 0:00:03.132 *********
2025-08-29 16:59:30.263669 | orchestrator | ok: [testbed-node-3]
2025-08-29 16:59:30.263681 | orchestrator | ok: [testbed-node-4]
2025-08-29 16:59:30.263693 | orchestrator | ok: [testbed-node-5]
2025-08-29 16:59:30.263705 | orchestrator |
2025-08-29 16:59:30.263717 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-08-29 16:59:30.263729 | orchestrator | Friday 29 August 2025 16:58:50 +0000 (0:00:00.218) 0:00:03.351 *********
2025-08-29 16:59:30.263756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 16:59:30.263771 | orchestrator |
2025-08-29 16:59:30.263783 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-08-29 16:59:30.263795 | orchestrator | Friday 29 August 2025 16:58:50 +0000 (0:00:00.146) 0:00:03.498 *********
2025-08-29 16:59:30.263807 | orchestrator | ok: [testbed-node-3]
2025-08-29 16:59:30.263819 | orchestrator | ok: [testbed-node-5]
2025-08-29 16:59:30.263830 | orchestrator | ok: [testbed-node-4]
2025-08-29 16:59:30.263842 | orchestrator |
2025-08-29 16:59:30.263854 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-08-29 16:59:30.263866 | orchestrator | Friday 29 August 2025 16:58:51 +0000 (0:00:00.446) 0:00:03.944 *********
2025-08-29 16:59:30.263877 | orchestrator | skipping: [testbed-node-3]
2025-08-29 16:59:30.263896 | orchestrator | skipping: [testbed-node-4]
2025-08-29 16:59:30.263906 | orchestrator | skipping: [testbed-node-5]
2025-08-29 16:59:30.263917 | orchestrator |
2025-08-29 16:59:30.263928 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-08-29 16:59:30.263938 | orchestrator | Friday 29 August 2025 16:58:51 +0000 (0:00:00.122) 0:00:04.066 *********
2025-08-29 16:59:30.263949 | orchestrator | changed: [testbed-node-4]
2025-08-29 16:59:30.263960 | orchestrator | changed: [testbed-node-3]
2025-08-29 16:59:30.263990 | orchestrator | changed: [testbed-node-5]
2025-08-29 16:59:30.264002 | orchestrator |
2025-08-29 16:59:30.264013 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-08-29 16:59:30.264024 | orchestrator | Friday 29 August 2025 16:58:52 +0000 (0:00:01.034) 0:00:05.101 *********
2025-08-29 16:59:30.264034 | orchestrator | ok: [testbed-node-3]
2025-08-29 16:59:30.264045 | orchestrator | ok: [testbed-node-4]
2025-08-29 16:59:30.264056 | orchestrator | ok: [testbed-node-5]
2025-08-29 16:59:30.264067 | orchestrator |
2025-08-29 16:59:30.264077 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-08-29 16:59:30.264088 | orchestrator | Friday 29 August 2025 16:58:52 +0000 (0:00:00.474) 0:00:05.575 *********
2025-08-29 16:59:30.264099 | orchestrator | changed: [testbed-node-3]
2025-08-29 16:59:30.264110 | orchestrator | changed: [testbed-node-4]
2025-08-29 16:59:30.264121 | orchestrator | changed: [testbed-node-5]
2025-08-29 16:59:30.264132 | orchestrator |
2025-08-29 16:59:30.264143 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-08-29 16:59:30.264154 | orchestrator | Friday 29 August 2025 16:58:53 +0000 (0:00:01.113) 0:00:06.688 *********
2025-08-29 16:59:30.264164 | orchestrator | changed: [testbed-node-5]
2025-08-29 16:59:30.264175 | orchestrator | changed: [testbed-node-4]
2025-08-29 16:59:30.264187 | orchestrator | changed: [testbed-node-3]
2025-08-29 16:59:30.264197 | orchestrator |
2025-08-29 16:59:30.264208 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-08-29 16:59:30.264219 | orchestrator | Friday 29 August 2025 16:59:12 +0000 (0:00:18.379) 0:00:25.068 *********
2025-08-29 16:59:30.264229 | orchestrator | skipping: [testbed-node-3]
2025-08-29 16:59:30.264240 | orchestrator | skipping: [testbed-node-4]
2025-08-29 16:59:30.264251 | orchestrator | skipping: [testbed-node-5]
2025-08-29 16:59:30.264262 | orchestrator |
2025-08-29 16:59:30.264273 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-08-29 16:59:30.264301 | orchestrator | Friday 29 August 2025 16:59:12 +0000 (0:00:00.112) 0:00:25.180 *********
2025-08-29 16:59:30.264313 | orchestrator | changed: [testbed-node-5]
2025-08-29 16:59:30.264324 | orchestrator | changed: [testbed-node-3]
2025-08-29 16:59:30.264334 | orchestrator | changed: [testbed-node-4]
2025-08-29 16:59:30.264345 | orchestrator |
2025-08-29 16:59:30.264356 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-08-29 16:59:30.264367 | orchestrator | Friday 29 August 2025 16:59:19 +0000 (0:00:07.495) 0:00:32.676 *********
2025-08-29 16:59:30.264378 | orchestrator | ok: [testbed-node-4]
2025-08-29 16:59:30.264388 | orchestrator | ok: [testbed-node-3]
2025-08-29 16:59:30.264399 | orchestrator | ok: [testbed-node-5]
2025-08-29 16:59:30.264410 | orchestrator |
2025-08-29 16:59:30.264421 | orchestrator | TASK [Copy fact files] *********************************************************
2025-08-29 16:59:30.264431 | orchestrator | Friday 29 August 2025 16:59:20 +0000 (0:00:00.457) 0:00:33.133 *********
2025-08-29 16:59:30.264442 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-08-29 16:59:30.264453 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-08-29 16:59:30.264470 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-08-29 16:59:30.264481 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-08-29 16:59:30.264492 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-08-29 16:59:30.264503 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-08-29 16:59:30.264514 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-08-29 16:59:30.264530 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-08-29 16:59:30.264541 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-08-29 16:59:30.264551 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-08-29 16:59:30.264562 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-08-29 16:59:30.264573 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-08-29 16:59:30.264584 | orchestrator |
2025-08-29 16:59:30.264594 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-08-29 16:59:30.264605 | orchestrator | Friday 29 August 2025 16:59:23 +0000 (0:00:03.547) 0:00:36.681 *********
2025-08-29 16:59:30.264616 | orchestrator | ok: [testbed-node-5]
2025-08-29 16:59:30.264626 | orchestrator | ok: [testbed-node-4]
2025-08-29 16:59:30.264637 | orchestrator | ok: [testbed-node-3]
2025-08-29 16:59:30.264647 | orchestrator |
2025-08-29 16:59:30.264658 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-08-29 16:59:30.264669 | orchestrator |
2025-08-29 16:59:30.264680 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 16:59:30.264690 | orchestrator | Friday 29 August 2025 16:59:25 +0000 (0:00:01.228) 0:00:37.910 *********
2025-08-29 16:59:30.264701 | orchestrator | ok: [testbed-node-1]
2025-08-29 16:59:30.264712 | orchestrator | ok: [testbed-node-2]
2025-08-29 16:59:30.264723 | orchestrator | ok: [testbed-node-0]
2025-08-29 16:59:30.264733 | orchestrator | ok: [testbed-node-5]
2025-08-29 16:59:30.264744 | orchestrator | ok: [testbed-node-4]
2025-08-29 16:59:30.264754 | orchestrator | ok: [testbed-node-3]
2025-08-29 16:59:30.264765 | orchestrator | ok: [testbed-manager]
2025-08-29 16:59:30.264775 | orchestrator |
2025-08-29 16:59:30.264786 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 16:59:30.264797 | orchestrator | testbed-manager : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 16:59:30.264808 | orchestrator | testbed-node-0 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 16:59:30.264821 | orchestrator | testbed-node-1 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 16:59:30.264831 | orchestrator | testbed-node-2 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 16:59:30.264842 | orchestrator | testbed-node-3 : ok=16 changed=7 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-08-29 16:59:30.264853 | orchestrator | testbed-node-4 : ok=16 changed=7 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-08-29 16:59:30.264864 | orchestrator | testbed-node-5 : ok=16 changed=7 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-08-29 16:59:30.264874 | orchestrator |
2025-08-29 16:59:30.264885 | orchestrator |
2025-08-29 16:59:30.264896 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 16:59:30.264907 | orchestrator | Friday 29 August 2025 16:59:30 +0000 (0:00:05.023) 0:00:42.933 *********
2025-08-29 16:59:30.264917 | orchestrator | ===============================================================================
2025-08-29 16:59:30.264928 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.38s
2025-08-29 16:59:30.264938 | orchestrator | Install required packages (Debian) -------------------------------------- 7.50s
2025-08-29 16:59:30.264949 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.02s
2025-08-29 16:59:30.264960 | orchestrator | Copy fact files --------------------------------------------------------- 3.55s
2025-08-29 16:59:30.264994 | orchestrator | Create custom facts directory ------------------------------------------- 1.51s
2025-08-29 16:59:30.265006 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.23s
2025-08-29 16:59:30.265023 | orchestrator | Copy fact file ---------------------------------------------------------- 1.17s
2025-08-29 16:59:30.501520 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.11s
2025-08-29 16:59:30.501611 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s
2025-08-29 16:59:30.501624 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2025-08-29 16:59:30.501636 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2025-08-29 16:59:30.501646 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2025-08-29 16:59:30.501658 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s
2025-08-29 16:59:30.501669 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2025-08-29 16:59:30.501679 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2025-08-29 16:59:30.501691 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.14s
2025-08-29 16:59:30.501701 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s
2025-08-29 16:59:30.501712 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-08-29 16:59:30.829444 | orchestrator | + osism apply bootstrap
2025-08-29 16:59:42.858121 | orchestrator | 2025-08-29 16:59:42 | INFO  | Task ee8c1bdc-8864-4cc2-b90e-8113709f90cf (bootstrap) was prepared for execution.
2025-08-29 16:59:42.858248 | orchestrator | 2025-08-29 16:59:42 | INFO  | It takes a moment until task ee8c1bdc-8864-4cc2-b90e-8113709f90cf (bootstrap) has been started and output is visible here.
2025-08-29 16:59:59.130556 | orchestrator |
2025-08-29 16:59:59.130636 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-08-29 16:59:59.130643 | orchestrator |
2025-08-29 16:59:59.130647 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-08-29 16:59:59.130652 | orchestrator | Friday 29 August 2025 16:59:47 +0000 (0:00:00.165) 0:00:00.165 *********
2025-08-29 16:59:59.130656 | orchestrator | ok: [testbed-manager]
2025-08-29 16:59:59.130661 | orchestrator | ok: [testbed-node-0]
2025-08-29 16:59:59.130665 | orchestrator | ok: [testbed-node-1]
2025-08-29 16:59:59.130669 | orchestrator | ok: [testbed-node-2]
2025-08-29 16:59:59.130673 | orchestrator | ok: [testbed-node-3]
2025-08-29 16:59:59.130677 | orchestrator | ok: [testbed-node-4]
2025-08-29 16:59:59.130681 | orchestrator | ok: [testbed-node-5]
2025-08-29 16:59:59.130685 | orchestrator |
2025-08-29 16:59:59.130688 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-08-29 16:59:59.130692 | orchestrator |
2025-08-29 16:59:59.130696 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 16:59:59.130700 | orchestrator | Friday 29 August 2025 16:59:47 +0000 (0:00:00.228) 0:00:00.394 *********
2025-08-29 16:59:59.130704 | orchestrator | ok: [testbed-node-0]
2025-08-29 16:59:59.130707 | orchestrator | ok: [testbed-node-1]
2025-08-29 16:59:59.130711 | orchestrator | ok: [testbed-node-2]
2025-08-29 16:59:59.130715 | orchestrator | ok: [testbed-manager]
2025-08-29 16:59:59.130718 | orchestrator | ok: [testbed-node-4]
2025-08-29 16:59:59.130722 | orchestrator | ok: [testbed-node-5]
2025-08-29 16:59:59.130726 | orchestrator | ok: [testbed-node-3]
2025-08-29 16:59:59.130729 | orchestrator |
2025-08-29 16:59:59.130733 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-08-29 16:59:59.130737 | orchestrator |
2025-08-29 16:59:59.130751 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 16:59:59.130755 | orchestrator | Friday 29 August 2025 16:59:51 +0000 (0:00:03.775) 0:00:04.170 *********
2025-08-29 16:59:59.130760 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-08-29 16:59:59.130777 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-08-29 16:59:59.130781 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-08-29 16:59:59.130784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-08-29 16:59:59.130788 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-08-29 16:59:59.130792 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-08-29 16:59:59.130795 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-08-29 16:59:59.130799 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-08-29 16:59:59.130803 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 16:59:59.130807 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-08-29 16:59:59.130810 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-08-29 16:59:59.130814 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-08-29 16:59:59.130818 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-08-29 16:59:59.130821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-08-29 16:59:59.130825 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 16:59:59.130829 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-08-29 16:59:59.130833 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-08-29 16:59:59.130836 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-08-29 16:59:59.130840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 16:59:59.130844 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 16:59:59.130847 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-08-29 16:59:59.130851 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-08-29 16:59:59.130855 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-08-29 16:59:59.130859 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-08-29 16:59:59.130862 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-08-29 16:59:59.130866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 16:59:59.130869 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-08-29 16:59:59.130873 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-08-29 16:59:59.130877 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-08-29 16:59:59.130880 | orchestrator | skipping: [testbed-node-2]
2025-08-29 16:59:59.130884 | orchestrator | skipping: [testbed-node-1]
2025-08-29 16:59:59.130888 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-08-29 16:59:59.130891 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-08-29 16:59:59.130895 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-08-29 16:59:59.130899 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-08-29 16:59:59.130902 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:59:59.130906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 16:59:59.130913 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-08-29 16:59:59.130916 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-08-29 16:59:59.130920 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 16:59:59.130924 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-08-29 16:59:59.130927 | orchestrator | skipping: [testbed-node-0]
2025-08-29 16:59:59.130931 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-08-29 16:59:59.130935 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-08-29 16:59:59.130938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 16:59:59.130942 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-08-29 16:59:59.130959 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-08-29 16:59:59.130963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 16:59:59.130980 | orchestrator | skipping: [testbed-node-3]
2025-08-29 16:59:59.130984 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-08-29 16:59:59.130988 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-08-29 16:59:59.130992 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-08-29 16:59:59.130996 | orchestrator | skipping: [testbed-node-4]
2025-08-29 16:59:59.130999 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-08-29 16:59:59.131003 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-08-29 16:59:59.131007 | orchestrator | skipping: [testbed-node-5]
2025-08-29 16:59:59.131011 | orchestrator |
2025-08-29 16:59:59.131015 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-08-29 16:59:59.131018 | orchestrator |
2025-08-29 16:59:59.131022 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-08-29 16:59:59.131026 | orchestrator | Friday 29 August 2025 16:59:51 +0000 (0:00:00.499) 0:00:04.670 *********
2025-08-29 16:59:59.131030 | orchestrator | ok: [testbed-node-2]
2025-08-29 16:59:59.131033 | orchestrator | ok: [testbed-manager]
2025-08-29 16:59:59.131037 | orchestrator | ok: [testbed-node-1]
2025-08-29 16:59:59.131041 | orchestrator | ok: [testbed-node-4]
2025-08-29 16:59:59.131045 | orchestrator | ok: [testbed-node-5]
2025-08-29 16:59:59.131048 | orchestrator | ok: [testbed-node-3]
2025-08-29 16:59:59.131052 | orchestrator | ok: [testbed-node-0]
2025-08-29 16:59:59.131056 | orchestrator |
2025-08-29 16:59:59.131059 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-08-29 16:59:59.131063 | orchestrator | Friday 29 August 2025 16:59:52 +0000 (0:00:01.214) 0:00:05.884 *********
2025-08-29 16:59:59.131067 | orchestrator | ok: [testbed-manager]
2025-08-29 16:59:59.131070 | orchestrator | ok: [testbed-node-1]
2025-08-29 16:59:59.131074 | orchestrator | ok: [testbed-node-0]
2025-08-29 16:59:59.131078 | orchestrator | ok: [testbed-node-3]
2025-08-29 16:59:59.131081 | orchestrator | ok: [testbed-node-5]
2025-08-29 16:59:59.131085 | orchestrator | ok: [testbed-node-2]
2025-08-29 16:59:59.131089 | orchestrator | ok: [testbed-node-4]
2025-08-29 16:59:59.131092 | orchestrator |
2025-08-29 16:59:59.131096 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-08-29 16:59:59.131100 | orchestrator | Friday 29 August 2025 16:59:54 +0000 (0:00:00.303) 0:00:07.316 *********
2025-08-29 16:59:59.131104 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 16:59:59.131110 | orchestrator |
2025-08-29 16:59:59.131114 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-08-29 16:59:59.131117 | orchestrator | Friday 29 August 2025 16:59:54 +0000 (0:00:02.070) 0:00:07.620 *********
2025-08-29 16:59:59.131121 | orchestrator | changed: [testbed-manager]
2025-08-29 16:59:59.131125 | orchestrator | changed: [testbed-node-4]
2025-08-29 16:59:59.131128 | orchestrator | changed: [testbed-node-1]
2025-08-29 16:59:59.131132 | orchestrator | changed: [testbed-node-3]
2025-08-29 16:59:59.131136 | orchestrator | changed: [testbed-node-2]
2025-08-29 16:59:59.131140 | orchestrator | changed: [testbed-node-5]
2025-08-29 16:59:59.131144 | orchestrator | changed: [testbed-node-0]
2025-08-29 16:59:59.131148 | orchestrator |
2025-08-29 16:59:59.131152 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-08-29 16:59:59.131157 | orchestrator | Friday 29 August 2025 16:59:56 +0000 (0:00:02.070) 0:00:09.690 *********
2025-08-29 16:59:59.131161 | orchestrator | skipping: [testbed-manager]
2025-08-29 16:59:59.131166 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 16:59:59.131176 | orchestrator |
2025-08-29 16:59:59.131180 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-08-29 16:59:59.131184 | orchestrator | Friday 29 August 2025 16:59:56 +0000 (0:00:00.295) 0:00:09.985 *********
2025-08-29 16:59:59.131189 | orchestrator | changed: [testbed-node-2]
2025-08-29 16:59:59.131193 | orchestrator | changed: [testbed-node-1]
2025-08-29 16:59:59.131197 | orchestrator | changed: [testbed-node-0]
2025-08-29 16:59:59.131201 | orchestrator | changed: [testbed-node-4]
2025-08-29 16:59:59.131205 | orchestrator | changed: [testbed-node-3]
2025-08-29 16:59:59.131209 | orchestrator | changed: [testbed-node-5]
2025-08-29 16:59:59.131213 | orchestrator |
2025-08-29 16:59:59.131218 | orchestrator | TASK
[osism.commons.proxy : Set system wide settings in environment file] ****** 2025-08-29 16:59:59.131222 | orchestrator | Friday 29 August 2025 16:59:57 +0000 (0:00:01.074) 0:00:11.060 ********* 2025-08-29 16:59:59.131226 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:59:59.131230 | orchestrator | changed: [testbed-node-2] 2025-08-29 16:59:59.131234 | orchestrator | changed: [testbed-node-4] 2025-08-29 16:59:59.131238 | orchestrator | changed: [testbed-node-1] 2025-08-29 16:59:59.131242 | orchestrator | changed: [testbed-node-0] 2025-08-29 16:59:59.131247 | orchestrator | changed: [testbed-node-5] 2025-08-29 16:59:59.131251 | orchestrator | changed: [testbed-node-3] 2025-08-29 16:59:59.131255 | orchestrator | 2025-08-29 16:59:59.131259 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-08-29 16:59:59.131263 | orchestrator | Friday 29 August 2025 16:59:58 +0000 (0:00:00.568) 0:00:11.629 ********* 2025-08-29 16:59:59.131268 | orchestrator | skipping: [testbed-node-0] 2025-08-29 16:59:59.131272 | orchestrator | skipping: [testbed-node-1] 2025-08-29 16:59:59.131276 | orchestrator | skipping: [testbed-node-2] 2025-08-29 16:59:59.131280 | orchestrator | skipping: [testbed-node-3] 2025-08-29 16:59:59.131286 | orchestrator | skipping: [testbed-node-4] 2025-08-29 16:59:59.131292 | orchestrator | skipping: [testbed-node-5] 2025-08-29 16:59:59.131298 | orchestrator | ok: [testbed-manager] 2025-08-29 16:59:59.131304 | orchestrator | 2025-08-29 16:59:59.131314 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-08-29 16:59:59.131322 | orchestrator | Friday 29 August 2025 16:59:58 +0000 (0:00:00.459) 0:00:12.088 ********* 2025-08-29 16:59:59.131328 | orchestrator | skipping: [testbed-manager] 2025-08-29 16:59:59.131334 | orchestrator | skipping: [testbed-node-0] 2025-08-29 16:59:59.131344 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:00:11.380691 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 17:00:11.380768 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:00:11.380781 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:00:11.380793 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:00:11.380804 | orchestrator | 2025-08-29 17:00:11.380817 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-08-29 17:00:11.380829 | orchestrator | Friday 29 August 2025 16:59:59 +0000 (0:00:00.249) 0:00:12.338 ********* 2025-08-29 17:00:11.380841 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:00:11.380854 | orchestrator | 2025-08-29 17:00:11.380866 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-08-29 17:00:11.380877 | orchestrator | Friday 29 August 2025 16:59:59 +0000 (0:00:00.318) 0:00:12.657 ********* 2025-08-29 17:00:11.380888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:00:11.380900 | orchestrator | 2025-08-29 17:00:11.380911 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-08-29 17:00:11.380922 | orchestrator | Friday 29 August 2025 16:59:59 +0000 (0:00:00.309) 0:00:12.967 ********* 2025-08-29 17:00:11.380953 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:00:11.380987 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:11.380999 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:11.381010 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:00:11.381020 | orchestrator | ok: 
[testbed-node-3] 2025-08-29 17:00:11.381031 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:11.381042 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:00:11.381052 | orchestrator | 2025-08-29 17:00:11.381064 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-08-29 17:00:11.381075 | orchestrator | Friday 29 August 2025 17:00:01 +0000 (0:00:01.367) 0:00:14.335 ********* 2025-08-29 17:00:11.381085 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:00:11.381096 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:00:11.381107 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:00:11.381117 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:00:11.381128 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:00:11.381139 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:00:11.381150 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:00:11.381161 | orchestrator | 2025-08-29 17:00:11.381172 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-08-29 17:00:11.381182 | orchestrator | Friday 29 August 2025 17:00:01 +0000 (0:00:00.237) 0:00:14.572 ********* 2025-08-29 17:00:11.381193 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:00:11.381204 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:00:11.381215 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:11.381226 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:00:11.381237 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:11.381248 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:00:11.381258 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:11.381269 | orchestrator | 2025-08-29 17:00:11.381282 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-08-29 17:00:11.381294 | orchestrator | Friday 29 August 2025 17:00:02 +0000 (0:00:00.593) 0:00:15.165 ********* 2025-08-29 17:00:11.381307 | 
orchestrator | skipping: [testbed-manager] 2025-08-29 17:00:11.381320 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:00:11.381332 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:00:11.381344 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:00:11.381356 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:00:11.381369 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:00:11.381382 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:00:11.381394 | orchestrator | 2025-08-29 17:00:11.381406 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-08-29 17:00:11.381420 | orchestrator | Friday 29 August 2025 17:00:02 +0000 (0:00:00.255) 0:00:15.421 ********* 2025-08-29 17:00:11.381433 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:11.381445 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:00:11.381456 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:00:11.381466 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:00:11.381519 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:00:11.381532 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:00:11.381542 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:00:11.381554 | orchestrator | 2025-08-29 17:00:11.381565 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-08-29 17:00:11.381575 | orchestrator | Friday 29 August 2025 17:00:02 +0000 (0:00:00.582) 0:00:16.003 ********* 2025-08-29 17:00:11.381586 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:11.381596 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:00:11.381607 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:00:11.381618 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:00:11.381628 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:00:11.381638 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:00:11.381649 | 
orchestrator | changed: [testbed-node-5] 2025-08-29 17:00:11.381660 | orchestrator | 2025-08-29 17:00:11.381675 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-08-29 17:00:11.381694 | orchestrator | Friday 29 August 2025 17:00:04 +0000 (0:00:01.143) 0:00:17.147 ********* 2025-08-29 17:00:11.381705 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:00:11.381715 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:11.381726 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:00:11.381737 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:11.381748 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:11.381758 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:00:11.381769 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:00:11.381780 | orchestrator | 2025-08-29 17:00:11.381791 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-08-29 17:00:11.381802 | orchestrator | Friday 29 August 2025 17:00:05 +0000 (0:00:01.191) 0:00:18.338 ********* 2025-08-29 17:00:11.381826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:00:11.381838 | orchestrator | 2025-08-29 17:00:11.381849 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-08-29 17:00:11.381860 | orchestrator | Friday 29 August 2025 17:00:05 +0000 (0:00:00.426) 0:00:18.764 ********* 2025-08-29 17:00:11.381871 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:00:11.381882 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:00:11.381892 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:00:11.381903 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:00:11.381913 | orchestrator | changed: [testbed-node-0] 
2025-08-29 17:00:11.381924 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:00:11.381934 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:00:11.381945 | orchestrator | 2025-08-29 17:00:11.381956 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 17:00:11.381984 | orchestrator | Friday 29 August 2025 17:00:06 +0000 (0:00:01.271) 0:00:20.036 ********* 2025-08-29 17:00:11.381995 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:11.382006 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:00:11.382057 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:00:11.382071 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:00:11.382082 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:00:11.382092 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:11.382103 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:11.382114 | orchestrator | 2025-08-29 17:00:11.382125 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-08-29 17:00:11.382136 | orchestrator | Friday 29 August 2025 17:00:07 +0000 (0:00:00.268) 0:00:20.305 ********* 2025-08-29 17:00:11.382147 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:11.382157 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:00:11.382168 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:00:11.382179 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:00:11.382189 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:00:11.382200 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:11.382211 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:11.382221 | orchestrator | 2025-08-29 17:00:11.382232 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 17:00:11.382243 | orchestrator | Friday 29 August 2025 17:00:07 +0000 (0:00:00.213) 0:00:20.518 ********* 2025-08-29 17:00:11.382254 | orchestrator | ok: [testbed-manager] 2025-08-29 
17:00:11.382265 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:00:11.382275 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:00:11.382286 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:00:11.382296 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:00:11.382307 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:11.382318 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:11.382328 | orchestrator | 2025-08-29 17:00:11.382339 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 17:00:11.382350 | orchestrator | Friday 29 August 2025 17:00:07 +0000 (0:00:00.222) 0:00:20.740 ********* 2025-08-29 17:00:11.382368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:00:11.382381 | orchestrator | 2025-08-29 17:00:11.382392 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-08-29 17:00:11.382402 | orchestrator | Friday 29 August 2025 17:00:07 +0000 (0:00:00.322) 0:00:21.063 ********* 2025-08-29 17:00:11.382413 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:11.382424 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:00:11.382435 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:00:11.382445 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:00:11.382456 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:00:11.382467 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:11.382477 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:11.382488 | orchestrator | 2025-08-29 17:00:11.382499 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 17:00:11.382510 | orchestrator | Friday 29 August 2025 17:00:08 +0000 (0:00:00.545) 0:00:21.608 ********* 2025-08-29 17:00:11.382521 | 
orchestrator | skipping: [testbed-manager] 2025-08-29 17:00:11.382531 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:00:11.382542 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:00:11.382553 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:00:11.382564 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:00:11.382574 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:00:11.382585 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:00:11.382595 | orchestrator | 2025-08-29 17:00:11.382606 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 17:00:11.382617 | orchestrator | Friday 29 August 2025 17:00:08 +0000 (0:00:00.220) 0:00:21.828 ********* 2025-08-29 17:00:11.382628 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:11.382638 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:00:11.382649 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:00:11.382660 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:11.382671 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:00:11.382681 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:00:11.382692 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:11.382703 | orchestrator | 2025-08-29 17:00:11.382714 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 17:00:11.382729 | orchestrator | Friday 29 August 2025 17:00:09 +0000 (0:00:01.130) 0:00:22.958 ********* 2025-08-29 17:00:11.382741 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:11.382751 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:00:11.382762 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:00:11.382773 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:00:11.382784 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:00:11.382794 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:11.382805 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:11.382816 | 
orchestrator | 2025-08-29 17:00:11.382827 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 17:00:11.382838 | orchestrator | Friday 29 August 2025 17:00:10 +0000 (0:00:00.542) 0:00:23.501 ********* 2025-08-29 17:00:11.382849 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:11.382860 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:11.382870 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:00:11.382901 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:00:11.382921 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:51.714887 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:00:51.715006 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:00:51.715029 | orchestrator | 2025-08-29 17:00:51.715046 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-08-29 17:00:51.715065 | orchestrator | Friday 29 August 2025 17:00:11 +0000 (0:00:00.987) 0:00:24.489 ********* 2025-08-29 17:00:51.715080 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:51.715116 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:00:51.715126 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:51.715134 | orchestrator | changed: [testbed-manager] 2025-08-29 17:00:51.715143 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:00:51.715151 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:00:51.715160 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:00:51.715168 | orchestrator | 2025-08-29 17:00:51.715177 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-08-29 17:00:51.715186 | orchestrator | Friday 29 August 2025 17:00:28 +0000 (0:00:16.713) 0:00:41.203 ********* 2025-08-29 17:00:51.715195 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:51.715203 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:00:51.715212 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:00:51.715220 
| orchestrator | ok: [testbed-node-2] 2025-08-29 17:00:51.715229 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:00:51.715237 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:51.715246 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:51.715254 | orchestrator | 2025-08-29 17:00:51.715262 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-08-29 17:00:51.715271 | orchestrator | Friday 29 August 2025 17:00:28 +0000 (0:00:00.220) 0:00:41.423 ********* 2025-08-29 17:00:51.715280 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:51.715288 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:00:51.715296 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:00:51.715305 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:00:51.715313 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:00:51.715321 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:51.715330 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:51.715338 | orchestrator | 2025-08-29 17:00:51.715347 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-08-29 17:00:51.715355 | orchestrator | Friday 29 August 2025 17:00:28 +0000 (0:00:00.214) 0:00:41.638 ********* 2025-08-29 17:00:51.715364 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:51.715372 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:00:51.715380 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:00:51.715389 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:00:51.715397 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:00:51.715406 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:51.715414 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:51.715423 | orchestrator | 2025-08-29 17:00:51.715431 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-08-29 17:00:51.715440 | orchestrator | Friday 29 August 2025 17:00:28 +0000 (0:00:00.208) 0:00:41.847 
********* 2025-08-29 17:00:51.715450 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:00:51.715463 | orchestrator | 2025-08-29 17:00:51.715471 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-08-29 17:00:51.715480 | orchestrator | Friday 29 August 2025 17:00:29 +0000 (0:00:00.321) 0:00:42.168 ********* 2025-08-29 17:00:51.715488 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:00:51.715546 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:00:51.715555 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:51.715563 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:00:51.715572 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:51.715580 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:51.715589 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:00:51.715597 | orchestrator | 2025-08-29 17:00:51.715606 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-08-29 17:00:51.715615 | orchestrator | Friday 29 August 2025 17:00:30 +0000 (0:00:01.356) 0:00:43.525 ********* 2025-08-29 17:00:51.715623 | orchestrator | changed: [testbed-manager] 2025-08-29 17:00:51.715632 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:00:51.715640 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:00:51.715656 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:00:51.715665 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:00:51.715673 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:00:51.715682 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:00:51.715690 | orchestrator | 2025-08-29 17:00:51.715699 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-08-29 
17:00:51.715707 | orchestrator | Friday 29 August 2025 17:00:31 +0000 (0:00:01.030) 0:00:44.555 ********* 2025-08-29 17:00:51.715716 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:00:51.715724 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:00:51.715733 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:51.715742 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:00:51.715751 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:00:51.715759 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:51.715768 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:51.715776 | orchestrator | 2025-08-29 17:00:51.715785 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-08-29 17:00:51.715794 | orchestrator | Friday 29 August 2025 17:00:32 +0000 (0:00:00.817) 0:00:45.372 ********* 2025-08-29 17:00:51.715803 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:00:51.715814 | orchestrator | 2025-08-29 17:00:51.715822 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-08-29 17:00:51.715832 | orchestrator | Friday 29 August 2025 17:00:32 +0000 (0:00:00.312) 0:00:45.685 ********* 2025-08-29 17:00:51.715841 | orchestrator | changed: [testbed-manager] 2025-08-29 17:00:51.715849 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:00:51.715858 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:00:51.715867 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:00:51.715875 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:00:51.715884 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:00:51.715892 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:00:51.715901 | orchestrator | 2025-08-29 17:00:51.715925 | orchestrator | TASK 
[osism.services.rsyslog : Include additional log server tasks] ************ 2025-08-29 17:00:51.715935 | orchestrator | Friday 29 August 2025 17:00:33 +0000 (0:00:01.018) 0:00:46.703 ********* 2025-08-29 17:00:51.715943 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:00:51.715952 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:00:51.715960 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:00:51.715989 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:00:51.716004 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:00:51.716019 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:00:51.716033 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:00:51.716045 | orchestrator | 2025-08-29 17:00:51.716056 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-08-29 17:00:51.716071 | orchestrator | Friday 29 August 2025 17:00:33 +0000 (0:00:00.292) 0:00:46.995 ********* 2025-08-29 17:00:51.716085 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:00:51.716099 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:00:51.716113 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:00:51.716127 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:00:51.716142 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:00:51.716156 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:00:51.716171 | orchestrator | changed: [testbed-manager] 2025-08-29 17:00:51.716186 | orchestrator | 2025-08-29 17:00:51.716201 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-08-29 17:00:51.716216 | orchestrator | Friday 29 August 2025 17:00:46 +0000 (0:00:12.459) 0:00:59.455 ********* 2025-08-29 17:00:51.716230 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:00:51.716245 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:00:51.716259 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:51.716272 | orchestrator | ok: 
[testbed-node-3] 2025-08-29 17:00:51.716297 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:00:51.716310 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:51.716327 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:51.716341 | orchestrator | 2025-08-29 17:00:51.716357 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-08-29 17:00:51.716371 | orchestrator | Friday 29 August 2025 17:00:47 +0000 (0:00:01.323) 0:01:00.779 ********* 2025-08-29 17:00:51.716383 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:00:51.716397 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:51.716409 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:00:51.716422 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:00:51.716435 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:00:51.716451 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:51.716467 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:51.716484 | orchestrator | 2025-08-29 17:00:51.716500 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-08-29 17:00:51.716516 | orchestrator | Friday 29 August 2025 17:00:48 +0000 (0:00:00.888) 0:01:01.667 ********* 2025-08-29 17:00:51.716531 | orchestrator | ok: [testbed-manager] 2025-08-29 17:00:51.716546 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:00:51.716559 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:00:51.716574 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:00:51.716583 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:00:51.716591 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:00:51.716600 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:00:51.716608 | orchestrator | 2025-08-29 17:00:51.716631 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-08-29 17:00:51.716640 | orchestrator | Friday 29 August 2025 17:00:48 +0000 (0:00:00.226) 0:01:01.894 ********* 
2025-08-29 17:00:51.716649 | orchestrator | ok: [testbed-manager]
2025-08-29 17:00:51.716658 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:00:51.716666 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:00:51.716675 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:00:51.716683 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:00:51.716691 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:00:51.716700 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:00:51.716708 | orchestrator |
2025-08-29 17:00:51.716717 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-08-29 17:00:51.716726 | orchestrator | Friday 29 August 2025 17:00:48 +0000 (0:00:00.333) 0:01:02.115 *********
2025-08-29 17:00:51.716735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:00:51.716745 | orchestrator |
2025-08-29 17:00:51.716754 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-08-29 17:00:51.716763 | orchestrator | Friday 29 August 2025 17:00:49 +0000 (0:00:00.333) 0:01:02.449 *********
2025-08-29 17:00:51.716771 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:00:51.716780 | orchestrator | ok: [testbed-manager]
2025-08-29 17:00:51.716788 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:00:51.716797 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:00:51.716805 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:00:51.716814 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:00:51.716822 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:00:51.716831 | orchestrator |
2025-08-29 17:00:51.716839 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-08-29 17:00:51.716848 | orchestrator | Friday 29 August 2025 17:00:50 +0000 (0:00:01.570) 0:01:04.020 *********
2025-08-29 17:00:51.716856 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:00:51.716865 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:00:51.716874 | orchestrator | changed: [testbed-manager]
2025-08-29 17:00:51.716886 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:00:51.716895 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:00:51.716904 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:00:51.716920 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:00:51.716928 | orchestrator |
2025-08-29 17:00:51.716937 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-08-29 17:00:51.716946 | orchestrator | Friday 29 August 2025 17:00:51 +0000 (0:00:00.543) 0:01:04.563 *********
2025-08-29 17:00:51.716954 | orchestrator | ok: [testbed-manager]
2025-08-29 17:00:51.716963 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:00:51.716999 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:00:51.717010 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:00:51.717020 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:00:51.717030 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:00:51.717039 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:00:51.717049 | orchestrator |
2025-08-29 17:00:51.717068 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-08-29 17:03:11.665351 | orchestrator | Friday 29 August 2025 17:00:51 +0000 (0:00:00.264) 0:01:04.827 *********
2025-08-29 17:03:11.665496 | orchestrator | ok: [testbed-manager]
2025-08-29 17:03:11.665519 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:03:11.665539 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:03:11.665556 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:03:11.665573 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:03:11.665591 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:03:11.665610 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:03:11.665627 | orchestrator |
2025-08-29 17:03:11.665648 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-08-29 17:03:11.665666 | orchestrator | Friday 29 August 2025 17:00:52 +0000 (0:00:01.085) 0:01:05.913 *********
2025-08-29 17:03:11.665684 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:03:11.665703 | orchestrator | changed: [testbed-manager]
2025-08-29 17:03:11.665720 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:03:11.665738 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:03:11.665756 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:03:11.665775 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:03:11.665793 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:03:11.665810 | orchestrator |
2025-08-29 17:03:11.665828 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-08-29 17:03:11.665846 | orchestrator | Friday 29 August 2025 17:00:54 +0000 (0:00:01.470) 0:01:07.384 *********
2025-08-29 17:03:11.665865 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:03:11.665885 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:03:11.665904 | orchestrator | ok: [testbed-manager]
2025-08-29 17:03:11.665922 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:03:11.665941 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:03:11.665959 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:03:11.666009 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:03:11.666100 | orchestrator |
2025-08-29 17:03:11.666119 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-08-29 17:03:11.666137 | orchestrator | Friday 29 August 2025 17:00:56 +0000 (0:00:02.135) 0:01:09.519 *********
2025-08-29 17:03:11.666155 | orchestrator | ok: [testbed-manager]
2025-08-29 17:03:11.666173 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:03:11.666192 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:03:11.666210 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:03:11.666226 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:03:11.666243 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:03:11.666259 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:03:11.666277 | orchestrator |
2025-08-29 17:03:11.666294 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-08-29 17:03:11.666311 | orchestrator | Friday 29 August 2025 17:01:34 +0000 (0:00:37.704) 0:01:47.223 *********
2025-08-29 17:03:11.666328 | orchestrator | changed: [testbed-manager]
2025-08-29 17:03:11.666345 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:03:11.666363 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:03:11.666382 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:03:11.666400 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:03:11.666418 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:03:11.666470 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:03:11.666489 | orchestrator |
2025-08-29 17:03:11.666505 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-08-29 17:03:11.666522 | orchestrator | Friday 29 August 2025 17:02:51 +0000 (0:01:17.664) 0:03:04.888 *********
2025-08-29 17:03:11.666538 | orchestrator | ok: [testbed-manager]
2025-08-29 17:03:11.666556 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:03:11.666575 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:03:11.666594 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:03:11.666614 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:03:11.666631 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:03:11.666646 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:03:11.666657 | orchestrator |
2025-08-29 17:03:11.666668 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-08-29 17:03:11.666680 | orchestrator | Friday 29 August 2025 17:02:53 +0000 (0:00:01.781) 0:03:06.670 *********
2025-08-29 17:03:11.666691 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:03:11.666701 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:03:11.666712 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:03:11.666722 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:03:11.666733 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:03:11.666744 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:03:11.666754 | orchestrator | changed: [testbed-manager]
2025-08-29 17:03:11.666765 | orchestrator |
2025-08-29 17:03:11.666776 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-08-29 17:03:11.666786 | orchestrator | Friday 29 August 2025 17:03:05 +0000 (0:00:12.404) 0:03:19.074 *********
2025-08-29 17:03:11.666800 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-08-29 17:03:11.666833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-08-29 17:03:11.666878 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-08-29 17:03:11.666898 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-08-29 17:03:11.666909 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-08-29 17:03:11.666921 | orchestrator |
2025-08-29 17:03:11.666932 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-08-29 17:03:11.666954 | orchestrator | Friday 29 August 2025 17:03:06 +0000 (0:00:00.468) 0:03:19.543 *********
2025-08-29 17:03:11.666965 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 17:03:11.667030 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:03:11.667042 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 17:03:11.667053 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 17:03:11.667064 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:03:11.667074 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:03:11.667085 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 17:03:11.667096 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:03:11.667107 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 17:03:11.667117 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 17:03:11.667128 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 17:03:11.667139 | orchestrator |
2025-08-29 17:03:11.667150 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-08-29 17:03:11.667161 | orchestrator | Friday 29 August 2025 17:03:07 +0000 (0:00:00.609) 0:03:20.153 *********
2025-08-29 17:03:11.667171 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 17:03:11.667183 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 17:03:11.667194 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 17:03:11.667205 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 17:03:11.667215 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 17:03:11.667226 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 17:03:11.667237 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 17:03:11.667247 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 17:03:11.667258 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 17:03:11.667269 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 17:03:11.667279 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:03:11.667290 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 17:03:11.667301 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 17:03:11.667312 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 17:03:11.667322 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 17:03:11.667334 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 17:03:11.667344 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 17:03:11.667355 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 17:03:11.667366 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 17:03:11.667377 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 17:03:11.667388 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 17:03:11.667413 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 17:03:14.791653 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 17:03:14.791756 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:03:14.791772 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 17:03:14.791784 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 17:03:14.791796 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 17:03:14.791807 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 17:03:14.791818 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 17:03:14.791829 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 17:03:14.791840 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 17:03:14.791851 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 17:03:14.791862 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 17:03:14.791873 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:03:14.791883 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 17:03:14.791894 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 17:03:14.791905 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 17:03:14.791916 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 17:03:14.791927 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 17:03:14.791937 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 17:03:14.791948 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 17:03:14.791959 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 17:03:14.792008 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 17:03:14.792020 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:03:14.792031 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 17:03:14.792042 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 17:03:14.792053 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 17:03:14.792064 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 17:03:14.792075 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 17:03:14.792104 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 17:03:14.792116 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 17:03:14.792127 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 17:03:14.792137 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 17:03:14.792148 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 17:03:14.792179 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 17:03:14.792190 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 17:03:14.792201 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 17:03:14.792211 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 17:03:14.792226 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 17:03:14.792237 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 17:03:14.792248 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 17:03:14.792259 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 17:03:14.792269 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 17:03:14.792280 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 17:03:14.792291 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 17:03:14.792318 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 17:03:14.792330 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 17:03:14.792340 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 17:03:14.792351 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 17:03:14.792362 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 17:03:14.792373 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 17:03:14.792383 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 17:03:14.792394 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 17:03:14.792405 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 17:03:14.792415 | orchestrator |
2025-08-29 17:03:14.792426 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-08-29 17:03:14.792437 | orchestrator | Friday 29 August 2025 17:03:11 +0000 (0:00:04.622) 0:03:24.775 *********
2025-08-29 17:03:14.792448 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 17:03:14.792458 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 17:03:14.792469 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 17:03:14.792479 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 17:03:14.792490 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 17:03:14.792501 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 17:03:14.792511 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 17:03:14.792522 | orchestrator |
2025-08-29 17:03:14.792533 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-08-29 17:03:14.792543 | orchestrator | Friday 29 August 2025 17:03:12 +0000 (0:00:00.584) 0:03:25.360 *********
2025-08-29 17:03:14.792554 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 17:03:14.792565 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:03:14.792575 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 17:03:14.792593 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 17:03:14.792604 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:03:14.792614 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:03:14.792625 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 17:03:14.792636 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:03:14.792646 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 17:03:14.792657 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 17:03:14.792668 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 17:03:14.792679 | orchestrator |
2025-08-29 17:03:14.792690 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-08-29 17:03:14.792700 | orchestrator | Friday 29 August 2025 17:03:12 +0000 (0:00:00.606) 0:03:25.966 *********
2025-08-29 17:03:14.792711 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 17:03:14.792722 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 17:03:14.792732 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:03:14.792743 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 17:03:14.792754 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:03:14.792764 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 17:03:14.792775 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:03:14.792786 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:03:14.792801 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 17:03:14.792812 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 17:03:14.792822 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 17:03:14.792833 | orchestrator |
2025-08-29 17:03:14.792844 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-08-29 17:03:14.792855 | orchestrator | Friday 29 August 2025 17:03:14 +0000 (0:00:01.635) 0:03:27.601 *********
2025-08-29 17:03:14.792865 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:03:14.792876 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:03:14.792887 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:03:14.792897 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:03:14.792908 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:03:14.792925 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:03:26.044010 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:03:26.044106 | orchestrator |
2025-08-29 17:03:26.044121 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-08-29 17:03:26.044133 | orchestrator | Friday 29 August 2025 17:03:14 +0000 (0:00:00.304) 0:03:27.906 *********
2025-08-29 17:03:26.044143 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:03:26.044153 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:03:26.044163 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:03:26.044172 | orchestrator | ok: [testbed-node-3]
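Aside: the group-specific kernel parameters applied by the sysctl tasks above can be summarized as a single sysctl.d-style fragment. This is a sketch only: the values are copied from the log, but the file path and grouping comments are illustrative, not part of the osism.commons.sysctl role (which applies each parameter via Ansible per host group).

```shell
# Sketch: render the sysctl values from the tasks above as a sysctl.d-style
# fragment. A temp file stands in for /etc/sysctl.d/ so this runs without root.
set -eu
out=$(mktemp)
cat > "$out" <<'EOF'
vm.swappiness = 1
vm.max_map_count = 262144
net.netfilter.nf_conntrack_max = 1048576
fs.inotify.max_user_instances = 1024
EOF
# On a real host this fragment would be activated with: sysctl -p "$out"
grep -c '=' "$out"   # count of settings in the fragment
```

The rabbitmq group's TCP tuning (keepalive timers, buffer sizes, somaxconn, syn backlog) would extend the same fragment on the control-plane nodes where the log shows `changed` results.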
2025-08-29 17:03:26.044183 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:03:26.044192 | orchestrator | ok: [testbed-manager]
2025-08-29 17:03:26.044202 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:03:26.044212 | orchestrator |
2025-08-29 17:03:26.044221 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-08-29 17:03:26.044231 | orchestrator | Friday 29 August 2025 17:03:20 +0000 (0:00:05.770) 0:03:33.677 *********
2025-08-29 17:03:26.044241 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-08-29 17:03:26.044271 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-08-29 17:03:26.044282 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:03:26.044292 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-08-29 17:03:26.044301 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:03:26.044311 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-08-29 17:03:26.044320 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:03:26.044330 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:03:26.044339 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-08-29 17:03:26.044349 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:03:26.044359 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-08-29 17:03:26.044368 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:03:26.044378 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-08-29 17:03:26.044387 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:03:26.044400 | orchestrator |
2025-08-29 17:03:26.044410 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-08-29 17:03:26.044419 | orchestrator | Friday 29 August 2025 17:03:20 +0000 (0:00:00.331) 0:03:34.009 *********
2025-08-29 17:03:26.044429 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-08-29 17:03:26.044438 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-08-29 17:03:26.044448 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-08-29 17:03:26.044457 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-08-29 17:03:26.044467 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-08-29 17:03:26.044476 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-08-29 17:03:26.044485 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-08-29 17:03:26.044495 | orchestrator |
2025-08-29 17:03:26.044504 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-08-29 17:03:26.044514 | orchestrator | Friday 29 August 2025 17:03:21 +0000 (0:00:00.956) 0:03:34.965 *********
2025-08-29 17:03:26.044524 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:03:26.044537 | orchestrator |
2025-08-29 17:03:26.044548 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-08-29 17:03:26.044560 | orchestrator | Friday 29 August 2025 17:03:22 +0000 (0:00:00.507) 0:03:35.472 *********
2025-08-29 17:03:26.044571 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:03:26.044582 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:03:26.044593 | orchestrator | ok: [testbed-manager]
2025-08-29 17:03:26.044604 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:03:26.044615 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:03:26.044626 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:03:26.044637 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:03:26.044647 | orchestrator |
2025-08-29 17:03:26.044658 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-08-29 17:03:26.044670 | orchestrator | Friday 29 August 2025 17:03:23 +0000 (0:00:01.075) 0:03:36.547 *********
2025-08-29 17:03:26.044681 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:03:26.044692 | orchestrator | ok: [testbed-manager]
2025-08-29 17:03:26.044703 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:03:26.044713 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:03:26.044725 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:03:26.044736 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:03:26.044747 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:03:26.044758 | orchestrator |
2025-08-29 17:03:26.044769 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-08-29 17:03:26.044780 | orchestrator | Friday 29 August 2025 17:03:24 +0000 (0:00:00.579) 0:03:37.126 *********
2025-08-29 17:03:26.044791 | orchestrator | changed: [testbed-manager]
2025-08-29 17:03:26.044802 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:03:26.044813 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:03:26.044832 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:03:26.044843 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:03:26.044855 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:03:26.044865 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:03:26.044877 | orchestrator |
2025-08-29 17:03:26.044888 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-08-29 17:03:26.044909 | orchestrator | Friday 29 August 2025 17:03:24 +0000 (0:00:00.550) 0:03:37.676 *********
2025-08-29 17:03:26.044919 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:03:26.044928 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:03:26.044938 | orchestrator | ok: [testbed-manager]
2025-08-29 17:03:26.044947 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:03:26.044957 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:03:26.044966 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:03:26.044992 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:03:26.045002 | orchestrator |
2025-08-29 17:03:26.045012 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-08-29 17:03:26.045022 | orchestrator | Friday 29 August 2025 17:03:25 +0000 (0:00:00.566) 0:03:38.243 *********
2025-08-29 17:03:26.045050 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756485546.521281, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:03:26.045064 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756485577.958187, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:03:26.045074 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756485578.571837, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:03:26.045084 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756485585.58941, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:03:26.045094 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756485580.7745724, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:03:26.045110 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756485586.5084713, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:03:26.045121 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756485572.2651255, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 17:03:26.045146 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 17:03:43.521863 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 17:03:43.521970 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 
'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 17:03:43.522095 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 17:03:43.522113 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 17:03:43.522147 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}) 2025-08-29 17:03:43.522166 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 17:03:43.522179 | orchestrator | 2025-08-29 17:03:43.522192 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-08-29 17:03:43.522204 | orchestrator | Friday 29 August 2025 17:03:26 +0000 (0:00:00.903) 0:03:39.147 ********* 2025-08-29 17:03:43.522215 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:03:43.522227 | orchestrator | changed: [testbed-manager] 2025-08-29 17:03:43.522237 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:03:43.522247 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:03:43.522258 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:03:43.522269 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:03:43.522279 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:03:43.522290 | orchestrator | 2025-08-29 17:03:43.522301 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-08-29 17:03:43.522311 | orchestrator | Friday 29 August 2025 17:03:27 +0000 (0:00:01.110) 0:03:40.257 ********* 2025-08-29 17:03:43.522322 | orchestrator | changed: [testbed-manager] 2025-08-29 17:03:43.522333 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:03:43.522343 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:03:43.522353 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:03:43.522383 | orchestrator | 
changed: [testbed-node-3] 2025-08-29 17:03:43.522397 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:03:43.522409 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:03:43.522421 | orchestrator | 2025-08-29 17:03:43.522433 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-08-29 17:03:43.522446 | orchestrator | Friday 29 August 2025 17:03:28 +0000 (0:00:01.110) 0:03:41.367 ********* 2025-08-29 17:03:43.522458 | orchestrator | changed: [testbed-manager] 2025-08-29 17:03:43.522470 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:03:43.522482 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:03:43.522494 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:03:43.522506 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:03:43.522519 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:03:43.522530 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:03:43.522542 | orchestrator | 2025-08-29 17:03:43.522555 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-08-29 17:03:43.522567 | orchestrator | Friday 29 August 2025 17:03:29 +0000 (0:00:01.121) 0:03:42.489 ********* 2025-08-29 17:03:43.522579 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:03:43.522591 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:03:43.522603 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:03:43.522615 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:03:43.522627 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:03:43.522639 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:03:43.522652 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:03:43.522671 | orchestrator | 2025-08-29 17:03:43.522683 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-08-29 17:03:43.522696 | orchestrator | Friday 29 August 2025 17:03:29 +0000 (0:00:00.273) 
0:03:42.762 ********* 2025-08-29 17:03:43.522707 | orchestrator | ok: [testbed-manager] 2025-08-29 17:03:43.522720 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:03:43.522733 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:03:43.522745 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:03:43.522756 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:03:43.522766 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:03:43.522777 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:03:43.522788 | orchestrator | 2025-08-29 17:03:43.522798 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-08-29 17:03:43.522809 | orchestrator | Friday 29 August 2025 17:03:30 +0000 (0:00:00.726) 0:03:43.489 ********* 2025-08-29 17:03:43.522822 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:03:43.522834 | orchestrator | 2025-08-29 17:03:43.522845 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-08-29 17:03:43.522856 | orchestrator | Friday 29 August 2025 17:03:30 +0000 (0:00:00.468) 0:03:43.958 ********* 2025-08-29 17:03:43.522867 | orchestrator | ok: [testbed-manager] 2025-08-29 17:03:43.522878 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:03:43.522888 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:03:43.522899 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:03:43.522909 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:03:43.522920 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:03:43.522931 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:03:43.522941 | orchestrator | 2025-08-29 17:03:43.522952 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-08-29 
17:03:43.522963 | orchestrator | Friday 29 August 2025 17:03:39 +0000 (0:00:08.223) 0:03:52.181 ********* 2025-08-29 17:03:43.522994 | orchestrator | ok: [testbed-manager] 2025-08-29 17:03:43.523006 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:03:43.523016 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:03:43.523027 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:03:43.523037 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:03:43.523048 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:03:43.523058 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:03:43.523069 | orchestrator | 2025-08-29 17:03:43.523080 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-08-29 17:03:43.523091 | orchestrator | Friday 29 August 2025 17:03:40 +0000 (0:00:01.208) 0:03:53.389 ********* 2025-08-29 17:03:43.523102 | orchestrator | ok: [testbed-manager] 2025-08-29 17:03:43.523113 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:03:43.523123 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:03:43.523134 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:03:43.523144 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:03:43.523155 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:03:43.523165 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:03:43.523176 | orchestrator | 2025-08-29 17:03:43.523187 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-08-29 17:03:43.523198 | orchestrator | Friday 29 August 2025 17:03:42 +0000 (0:00:02.183) 0:03:55.572 ********* 2025-08-29 17:03:43.523208 | orchestrator | ok: [testbed-manager] 2025-08-29 17:03:43.523219 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:03:43.523229 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:03:43.523240 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:03:43.523255 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:03:43.523266 | orchestrator | ok: [testbed-node-4] 2025-08-29 
17:03:43.523277 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:03:43.523287 | orchestrator | 2025-08-29 17:03:43.523298 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-08-29 17:03:43.523317 | orchestrator | Friday 29 August 2025 17:03:42 +0000 (0:00:00.335) 0:03:55.908 ********* 2025-08-29 17:03:43.523327 | orchestrator | ok: [testbed-manager] 2025-08-29 17:03:43.523338 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:03:43.523348 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:03:43.523359 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:03:43.523369 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:03:43.523380 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:03:43.523390 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:03:43.523401 | orchestrator | 2025-08-29 17:03:43.523411 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-08-29 17:03:43.523422 | orchestrator | Friday 29 August 2025 17:03:43 +0000 (0:00:00.436) 0:03:56.344 ********* 2025-08-29 17:03:43.523433 | orchestrator | ok: [testbed-manager] 2025-08-29 17:03:43.523443 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:03:43.523454 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:03:43.523464 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:03:43.523475 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:03:43.523492 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:04:53.203582 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:04:53.203692 | orchestrator | 2025-08-29 17:04:53.203710 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-08-29 17:04:53.203723 | orchestrator | Friday 29 August 2025 17:03:43 +0000 (0:00:00.287) 0:03:56.632 ********* 2025-08-29 17:04:53.203735 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:04:53.203746 | orchestrator | ok: [testbed-manager] 2025-08-29 
17:04:53.203757 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:04:53.203768 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:04:53.203779 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:04:53.203790 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:04:53.203801 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:04:53.203811 | orchestrator | 2025-08-29 17:04:53.203823 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-08-29 17:04:53.203834 | orchestrator | Friday 29 August 2025 17:03:49 +0000 (0:00:05.544) 0:04:02.176 ********* 2025-08-29 17:04:53.203846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:04:53.203860 | orchestrator | 2025-08-29 17:04:53.203872 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-08-29 17:04:53.203883 | orchestrator | Friday 29 August 2025 17:03:49 +0000 (0:00:00.414) 0:04:02.590 ********* 2025-08-29 17:04:53.203894 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-08-29 17:04:53.203905 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-08-29 17:04:53.203916 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-08-29 17:04:53.203927 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-08-29 17:04:53.203938 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:04:53.203949 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-08-29 17:04:53.203960 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:04:53.203971 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-08-29 17:04:53.204008 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-08-29 
17:04:53.204021 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-08-29 17:04:53.204040 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:04:53.204058 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-08-29 17:04:53.204074 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-08-29 17:04:53.204091 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:04:53.204109 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-08-29 17:04:53.204126 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-08-29 17:04:53.204143 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:04:53.204194 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:04:53.204214 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-08-29 17:04:53.204233 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-08-29 17:04:53.204251 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:04:53.204265 | orchestrator | 2025-08-29 17:04:53.204277 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-08-29 17:04:53.204290 | orchestrator | Friday 29 August 2025 17:03:49 +0000 (0:00:00.338) 0:04:02.929 ********* 2025-08-29 17:04:53.204303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:04:53.204315 | orchestrator | 2025-08-29 17:04:53.204327 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-08-29 17:04:53.204341 | orchestrator | Friday 29 August 2025 17:03:50 +0000 (0:00:00.404) 0:04:03.334 ********* 2025-08-29 17:04:53.204353 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-08-29 17:04:53.204366 | 
orchestrator | skipping: [testbed-manager] 2025-08-29 17:04:53.204378 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-08-29 17:04:53.204391 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-08-29 17:04:53.204403 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:04:53.204415 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:04:53.204427 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-08-29 17:04:53.204440 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-08-29 17:04:53.204453 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:04:53.204480 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:04:53.204493 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-08-29 17:04:53.204504 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:04:53.204515 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-08-29 17:04:53.204526 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:04:53.204536 | orchestrator | 2025-08-29 17:04:53.204547 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-08-29 17:04:53.204558 | orchestrator | Friday 29 August 2025 17:03:50 +0000 (0:00:00.326) 0:04:03.661 ********* 2025-08-29 17:04:53.204569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:04:53.204580 | orchestrator | 2025-08-29 17:04:53.204590 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-08-29 17:04:53.204601 | orchestrator | Friday 29 August 2025 17:03:50 +0000 (0:00:00.419) 0:04:04.080 ********* 2025-08-29 17:04:53.204612 | orchestrator | changed: 
[testbed-node-4] 2025-08-29 17:04:53.204641 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:04:53.204652 | orchestrator | changed: [testbed-manager] 2025-08-29 17:04:53.204663 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:04:53.204674 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:04:53.204684 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:04:53.204695 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:04:53.204706 | orchestrator | 2025-08-29 17:04:53.204717 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-08-29 17:04:53.204728 | orchestrator | Friday 29 August 2025 17:04:25 +0000 (0:00:34.507) 0:04:38.588 ********* 2025-08-29 17:04:53.204739 | orchestrator | changed: [testbed-manager] 2025-08-29 17:04:53.204749 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:04:53.204760 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:04:53.204771 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:04:53.204781 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:04:53.204804 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:04:53.204815 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:04:53.204826 | orchestrator | 2025-08-29 17:04:53.204837 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-08-29 17:04:53.204848 | orchestrator | Friday 29 August 2025 17:04:33 +0000 (0:00:08.328) 0:04:46.917 ********* 2025-08-29 17:04:53.204858 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:04:53.204869 | orchestrator | changed: [testbed-manager] 2025-08-29 17:04:53.204880 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:04:53.204890 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:04:53.204901 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:04:53.204912 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:04:53.204922 | orchestrator | changed: [testbed-node-0] 
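The cleanup steps logged above (removing cloud-init and unattended-upgrades, then cleaning the apt cache and orphaned dependencies) map onto standard `ansible.builtin.apt` usage. A minimal sketch of the equivalent tasks — this is illustrative only, not the actual osism.commons.cleanup source:

```yaml
# Hedged sketch of the package-cleanup pattern seen in the log.
# Package names are taken from the task headers; everything else is assumed.
- name: Remove cloud-init and unattended-upgrades
  ansible.builtin.apt:
    name:
      - cloud-init
      - unattended-upgrades
    state: absent
    purge: true

- name: Remove useless packages from the cache
  ansible.builtin.apt:
    autoclean: true

- name: Remove dependencies that are no longer required
  ansible.builtin.apt:
    autoremove: true
```

The long duration of "Cleanup installed packages" (0:00:34.507) is expected: it is the only step in this block that removes a larger package set across all seven hosts at once.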
2025-08-29 17:04:53.204933 | orchestrator | 2025-08-29 17:04:53.204944 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-08-29 17:04:53.204955 | orchestrator | Friday 29 August 2025 17:04:41 +0000 (0:00:07.564) 0:04:54.482 ********* 2025-08-29 17:04:53.204966 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:04:53.204999 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:04:53.205010 | orchestrator | ok: [testbed-manager] 2025-08-29 17:04:53.205021 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:04:53.205032 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:04:53.205042 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:04:53.205053 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:04:53.205063 | orchestrator | 2025-08-29 17:04:53.205074 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-08-29 17:04:53.205086 | orchestrator | Friday 29 August 2025 17:04:43 +0000 (0:00:01.742) 0:04:56.224 ********* 2025-08-29 17:04:53.205096 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:04:53.205107 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:04:53.205118 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:04:53.205129 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:04:53.205139 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:04:53.205150 | orchestrator | changed: [testbed-manager] 2025-08-29 17:04:53.205160 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:04:53.205171 | orchestrator | 2025-08-29 17:04:53.205182 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-08-29 17:04:53.205192 | orchestrator | Friday 29 August 2025 17:04:49 +0000 (0:00:06.031) 0:05:02.256 ********* 2025-08-29 17:04:53.205204 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:04:53.205216 | orchestrator | 2025-08-29 17:04:53.205227 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-08-29 17:04:53.205238 | orchestrator | Friday 29 August 2025 17:04:49 +0000 (0:00:00.561) 0:05:02.817 ********* 2025-08-29 17:04:53.205249 | orchestrator | changed: [testbed-manager] 2025-08-29 17:04:53.205259 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:04:53.205270 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:04:53.205280 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:04:53.205291 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:04:53.205302 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:04:53.205312 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:04:53.205323 | orchestrator | 2025-08-29 17:04:53.205333 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-08-29 17:04:53.205344 | orchestrator | Friday 29 August 2025 17:04:50 +0000 (0:00:00.731) 0:05:03.549 ********* 2025-08-29 17:04:53.205355 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:04:53.205366 | orchestrator | ok: [testbed-manager] 2025-08-29 17:04:53.205376 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:04:53.205387 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:04:53.205398 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:04:53.205408 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:04:53.205426 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:04:53.205437 | orchestrator | 2025-08-29 17:04:53.205448 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-08-29 17:04:53.205459 | orchestrator | Friday 29 August 2025 17:04:52 +0000 (0:00:01.640) 0:05:05.189 ********* 2025-08-29 17:04:53.205470 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:04:53.205480 | orchestrator | 
changed: [testbed-node-2] 2025-08-29 17:04:53.205491 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:04:53.205502 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:04:53.205512 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:04:53.205523 | orchestrator | changed: [testbed-manager] 2025-08-29 17:04:53.205533 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:04:53.205544 | orchestrator | 2025-08-29 17:04:53.205555 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-08-29 17:04:53.205566 | orchestrator | Friday 29 August 2025 17:04:52 +0000 (0:00:00.807) 0:05:05.996 ********* 2025-08-29 17:04:53.205576 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:04:53.205587 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:04:53.205598 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:04:53.205608 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:04:53.205619 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:04:53.205629 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:04:53.205640 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:04:53.205651 | orchestrator | 2025-08-29 17:04:53.205662 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-08-29 17:04:53.205679 | orchestrator | Friday 29 August 2025 17:04:53 +0000 (0:00:00.316) 0:05:06.313 ********* 2025-08-29 17:05:20.228670 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:05:20.228785 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:05:20.228801 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:05:20.228813 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:05:20.228824 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:05:20.228835 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:05:20.228846 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:05:20.228859 | orchestrator | 2025-08-29 
17:05:20.228871 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-08-29 17:05:20.228902 | orchestrator | Friday 29 August 2025 17:04:53 +0000 (0:00:00.430) 0:05:06.744 *********
2025-08-29 17:05:20.228914 | orchestrator | ok: [testbed-manager]
2025-08-29 17:05:20.228926 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:05:20.228937 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:05:20.228947 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:05:20.228958 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:05:20.228969 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:05:20.229008 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:05:20.229019 | orchestrator |
2025-08-29 17:05:20.229031 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-08-29 17:05:20.229042 | orchestrator | Friday 29 August 2025 17:04:53 +0000 (0:00:00.304) 0:05:07.048 *********
2025-08-29 17:05:20.229053 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:05:20.229064 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:05:20.229075 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:05:20.229086 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:05:20.229098 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:05:20.229109 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:05:20.229120 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:05:20.229131 | orchestrator |
2025-08-29 17:05:20.229142 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-08-29 17:05:20.229154 | orchestrator | Friday 29 August 2025 17:04:54 +0000 (0:00:00.305) 0:05:07.354 *********
2025-08-29 17:05:20.229165 | orchestrator | ok: [testbed-manager]
2025-08-29 17:05:20.229176 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:05:20.229187 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:05:20.229198 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:05:20.229231 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:05:20.229243 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:05:20.229253 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:05:20.229264 | orchestrator |
2025-08-29 17:05:20.229275 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-08-29 17:05:20.229286 | orchestrator | Friday 29 August 2025 17:04:54 +0000 (0:00:00.345) 0:05:07.700 *********
2025-08-29 17:05:20.229297 | orchestrator | ok: [testbed-manager] =>
2025-08-29 17:05:20.229307 | orchestrator |   docker_version: 5:27.5.1
2025-08-29 17:05:20.229318 | orchestrator | ok: [testbed-node-0] =>
2025-08-29 17:05:20.229329 | orchestrator |   docker_version: 5:27.5.1
2025-08-29 17:05:20.229340 | orchestrator | ok: [testbed-node-1] =>
2025-08-29 17:05:20.229350 | orchestrator |   docker_version: 5:27.5.1
2025-08-29 17:05:20.229361 | orchestrator | ok: [testbed-node-2] =>
2025-08-29 17:05:20.229371 | orchestrator |   docker_version: 5:27.5.1
2025-08-29 17:05:20.229382 | orchestrator | ok: [testbed-node-3] =>
2025-08-29 17:05:20.229393 | orchestrator |   docker_version: 5:27.5.1
2025-08-29 17:05:20.229404 | orchestrator | ok: [testbed-node-4] =>
2025-08-29 17:05:20.229415 | orchestrator |   docker_version: 5:27.5.1
2025-08-29 17:05:20.229425 | orchestrator | ok: [testbed-node-5] =>
2025-08-29 17:05:20.229436 | orchestrator |   docker_version: 5:27.5.1
2025-08-29 17:05:20.229447 | orchestrator |
2025-08-29 17:05:20.229458 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-08-29 17:05:20.229469 | orchestrator | Friday 29 August 2025 17:04:54 +0000 (0:00:00.333) 0:05:08.034 *********
2025-08-29 17:05:20.229480 | orchestrator | ok: [testbed-manager] =>
2025-08-29 17:05:20.229491 | orchestrator |   docker_cli_version: 5:27.5.1
2025-08-29 17:05:20.229501 | orchestrator | ok: [testbed-node-0] =>
2025-08-29 17:05:20.229512 | orchestrator |   docker_cli_version: 5:27.5.1
2025-08-29 17:05:20.229522 | orchestrator | ok: [testbed-node-1] =>
2025-08-29 17:05:20.229533 | orchestrator |   docker_cli_version: 5:27.5.1
2025-08-29 17:05:20.229544 | orchestrator | ok: [testbed-node-2] =>
2025-08-29 17:05:20.229554 | orchestrator |   docker_cli_version: 5:27.5.1
2025-08-29 17:05:20.229565 | orchestrator | ok: [testbed-node-3] =>
2025-08-29 17:05:20.229575 | orchestrator |   docker_cli_version: 5:27.5.1
2025-08-29 17:05:20.229586 | orchestrator | ok: [testbed-node-4] =>
2025-08-29 17:05:20.229597 | orchestrator |   docker_cli_version: 5:27.5.1
2025-08-29 17:05:20.229607 | orchestrator | ok: [testbed-node-5] =>
2025-08-29 17:05:20.229618 | orchestrator |   docker_cli_version: 5:27.5.1
2025-08-29 17:05:20.229629 | orchestrator |
2025-08-29 17:05:20.229640 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-08-29 17:05:20.229651 | orchestrator | Friday 29 August 2025 17:04:55 +0000 (0:00:00.322) 0:05:08.356 *********
2025-08-29 17:05:20.229661 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:05:20.229672 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:05:20.229683 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:05:20.229694 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:05:20.229704 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:05:20.229720 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:05:20.229732 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:05:20.229742 | orchestrator |
2025-08-29 17:05:20.229753 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-08-29 17:05:20.229764 | orchestrator | Friday 29 August 2025 17:04:55 +0000 (0:00:00.284) 0:05:08.641 *********
2025-08-29 17:05:20.229775 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:05:20.229786 | orchestrator | skipping: [testbed-node-0]
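The role reports `docker_version: 5:27.5.1` on every host. The `5:` prefix is a Debian package *epoch*, which dpkg compares before the upstream version. A minimal sketch (not part of the role, illustration only) of splitting such a version string:

```python
# Minimal sketch: split a Debian package version of the form
# [epoch:]upstream[-revision], e.g. the "5:27.5.1" printed above.
# Not part of the osism.services.docker role -- illustration only.
def split_debian_version(version: str):
    epoch, sep, rest = version.partition(":")
    if not sep:                # no ":" present -> epoch defaults to 0
        epoch, rest = "0", version
    upstream, _, revision = rest.partition("-")
    return int(epoch), upstream, revision or None

print(split_debian_version("5:27.5.1"))   # epoch 5, upstream 27.5.1
```

Because the epoch is compared first, `5:27.5.1` sorts newer than any epoch-4 version, regardless of the upstream number.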
2025-08-29 17:05:20.229796 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:05:20.229807 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:05:20.229818 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:05:20.229829 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:05:20.229840 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:05:20.229850 | orchestrator |
2025-08-29 17:05:20.229861 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-08-29 17:05:20.229880 | orchestrator | Friday 29 August 2025 17:04:55 +0000 (0:00:00.266) 0:05:08.907 *********
2025-08-29 17:05:20.229908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:05:20.229923 | orchestrator |
2025-08-29 17:05:20.229934 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-08-29 17:05:20.229945 | orchestrator | Friday 29 August 2025 17:04:56 +0000 (0:00:00.446) 0:05:09.353 *********
2025-08-29 17:05:20.229956 | orchestrator | ok: [testbed-manager]
2025-08-29 17:05:20.229967 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:05:20.229998 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:05:20.230009 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:05:20.230076 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:05:20.230088 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:05:20.230099 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:05:20.230110 | orchestrator |
2025-08-29 17:05:20.230121 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-08-29 17:05:20.230132 | orchestrator | Friday 29 August 2025 17:04:57 +0000 (0:00:00.896) 0:05:10.249 *********
2025-08-29 17:05:20.230143 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:05:20.230154 | orchestrator | ok: [testbed-manager]
2025-08-29 17:05:20.230164 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:05:20.230175 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:05:20.230186 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:05:20.230196 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:05:20.230207 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:05:20.230218 | orchestrator |
2025-08-29 17:05:20.230229 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-08-29 17:05:20.230241 | orchestrator | Friday 29 August 2025 17:05:00 +0000 (0:00:03.494) 0:05:13.744 *********
2025-08-29 17:05:20.230251 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-08-29 17:05:20.230263 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-08-29 17:05:20.230274 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-08-29 17:05:20.230285 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-08-29 17:05:20.230296 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-08-29 17:05:20.230307 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-08-29 17:05:20.230317 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:05:20.230328 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-08-29 17:05:20.230339 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-08-29 17:05:20.230350 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-08-29 17:05:20.230361 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:05:20.230371 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-08-29 17:05:20.230382 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-08-29 17:05:20.230393 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
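The guard above first gathers package facts, then checks the blocklist `containerd`, `docker.io`, `docker-engine` per host; every item is skipped here because none of the conflicting packages are present. A sketch of the same filter over an `ansible_facts.packages`-shaped mapping (the package data below is made up for illustration):

```python
# Sketch of the "should not be installed" guard: filter a package-facts
# mapping (shaped like Ansible's ansible_facts.packages) against a blocklist.
BLOCKLIST = ["containerd", "docker.io", "docker-engine"]

def find_conflicts(packages: dict) -> list:
    """Return blocklisted package names present in the gathered facts."""
    return [name for name in BLOCKLIST if name in packages]

# Hypothetical facts: containerd.io (Docker's own build) is fine,
# since the blocklist names the distro package "containerd" instead.
facts = {"containerd.io": [{"version": "1.7.25"}],
         "apt-transport-https": [{"version": "2.7.14"}]}
print(find_conflicts(facts))   # empty list -> all items skipped
```

This mirrors why the log shows only `skipping:` lines for this task on every host.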
2025-08-29 17:05:20.230404 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:05:20.230414 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-08-29 17:05:20.230425 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-08-29 17:05:20.230436 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-08-29 17:05:20.230447 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:05:20.230458 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-08-29 17:05:20.230469 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-08-29 17:05:20.230480 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-08-29 17:05:20.230491 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:05:20.230501 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:05:20.230520 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-08-29 17:05:20.230531 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-08-29 17:05:20.230542 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-08-29 17:05:20.230553 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:05:20.230564 | orchestrator |
2025-08-29 17:05:20.230575 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-08-29 17:05:20.230586 | orchestrator | Friday 29 August 2025 17:05:01 +0000 (0:00:00.630) 0:05:14.375 *********
2025-08-29 17:05:20.230596 | orchestrator | ok: [testbed-manager]
2025-08-29 17:05:20.230607 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:05:20.230618 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:05:20.230629 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:05:20.230640 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:05:20.230651 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:05:20.230661 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:05:20.230672 | orchestrator |
2025-08-29 17:05:20.230683 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-08-29 17:05:20.230694 | orchestrator | Friday 29 August 2025 17:05:07 +0000 (0:00:06.398) 0:05:20.773 *********
2025-08-29 17:05:20.230705 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:05:20.230716 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:05:20.230732 | orchestrator | ok: [testbed-manager]
2025-08-29 17:05:20.230743 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:05:20.230754 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:05:20.230765 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:05:20.230775 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:05:20.230786 | orchestrator |
2025-08-29 17:05:20.230797 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-08-29 17:05:20.230807 | orchestrator | Friday 29 August 2025 17:05:08 +0000 (0:00:01.265) 0:05:22.038 *********
2025-08-29 17:05:20.230818 | orchestrator | ok: [testbed-manager]
2025-08-29 17:05:20.230829 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:05:20.230840 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:05:20.230851 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:05:20.230861 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:05:20.230872 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:05:20.230882 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:05:20.230893 | orchestrator |
2025-08-29 17:05:20.230904 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-08-29 17:05:20.230915 | orchestrator | Friday 29 August 2025 17:05:16 +0000 (0:00:07.548) 0:05:29.586 *********
2025-08-29 17:05:20.230926 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:05:20.230937 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:05:20.230948 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:05:20.230966 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:02.626766 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:02.626886 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:02.626901 | orchestrator | changed: [testbed-manager]
2025-08-29 17:06:02.626913 | orchestrator |
2025-08-29 17:06:02.626925 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-08-29 17:06:02.626939 | orchestrator | Friday 29 August 2025 17:05:20 +0000 (0:00:03.750) 0:05:33.337 *********
2025-08-29 17:06:02.626950 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:02.626962 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:02.627000 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:02.627012 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:02.627023 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:02.627034 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:02.627045 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:02.627056 | orchestrator |
2025-08-29 17:06:02.627067 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-08-29 17:06:02.627078 | orchestrator | Friday 29 August 2025 17:05:21 +0000 (0:00:01.365) 0:05:34.703 *********
2025-08-29 17:06:02.627112 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:02.627123 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:02.627134 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:02.627145 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:02.627156 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:02.627166 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:02.627177 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:02.627188 | orchestrator |
2025-08-29 17:06:02.627199 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-08-29 17:06:02.627210 | orchestrator | Friday 29 August 2025 17:05:22 +0000 (0:00:00.840) 0:05:36.012 *********
2025-08-29 17:06:02.627221 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:06:02.627231 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:06:02.627242 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:06:02.627253 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:06:02.627264 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:06:02.627274 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:06:02.627285 | orchestrator | changed: [testbed-manager]
2025-08-29 17:06:02.627296 | orchestrator |
2025-08-29 17:06:02.627309 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-08-29 17:06:02.627322 | orchestrator | Friday 29 August 2025 17:05:23 +0000 (0:00:00.840) 0:05:36.853 *********
2025-08-29 17:06:02.627335 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:02.627347 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:02.627360 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:02.627372 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:02.627385 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:02.627397 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:02.627409 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:02.627422 | orchestrator |
2025-08-29 17:06:02.627434 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-08-29 17:06:02.627448 | orchestrator | Friday 29 August 2025 17:05:32 +0000 (0:00:09.161) 0:05:46.014 *********
2025-08-29 17:06:02.627460 | orchestrator | changed: [testbed-manager]
2025-08-29 17:06:02.627472 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:02.627484 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:02.627497 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:02.627509 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:02.627522 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:02.627534 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:02.627546 | orchestrator |
2025-08-29 17:06:02.627559 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-08-29 17:06:02.627571 | orchestrator | Friday 29 August 2025 17:05:33 +0000 (0:00:00.975) 0:05:46.989 *********
2025-08-29 17:06:02.627583 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:02.627595 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:02.627607 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:02.627620 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:02.627633 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:02.627645 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:02.627657 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:02.627669 | orchestrator |
2025-08-29 17:06:02.627682 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-08-29 17:06:02.627693 | orchestrator | Friday 29 August 2025 17:05:42 +0000 (0:00:08.416) 0:05:55.406 *********
2025-08-29 17:06:02.627704 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:02.627715 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:02.627726 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:02.627736 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:02.627747 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:02.627758 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:02.627768 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:02.627779 | orchestrator |
2025-08-29 17:06:02.627797 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-08-29 17:06:02.627823 | orchestrator | Friday 29 August 2025 17:05:52 +0000 (0:00:10.513) 0:06:05.919 *********
2025-08-29 17:06:02.627834 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-08-29 17:06:02.627845 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-08-29 17:06:02.627856 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-08-29 17:06:02.627867 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-08-29 17:06:02.627877 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-08-29 17:06:02.627888 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-08-29 17:06:02.627899 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-08-29 17:06:02.627910 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-08-29 17:06:02.627921 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-08-29 17:06:02.627932 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-08-29 17:06:02.627942 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-08-29 17:06:02.627953 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-08-29 17:06:02.627964 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-08-29 17:06:02.628006 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-08-29 17:06:02.628017 | orchestrator |
2025-08-29 17:06:02.628028 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-08-29 17:06:02.628056 | orchestrator | Friday 29 August 2025 17:05:53 +0000 (0:00:01.179) 0:06:07.099 *********
2025-08-29 17:06:02.628068 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:06:02.628079 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:06:02.628089 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:06:02.628100 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:06:02.628111 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:06:02.628121 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:06:02.628132 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:06:02.628143 | orchestrator |
2025-08-29 17:06:02.628153 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-08-29 17:06:02.628164 | orchestrator | Friday 29 August 2025 17:05:54 +0000 (0:00:00.549) 0:06:07.648 *********
2025-08-29 17:06:02.628175 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:02.628186 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:02.628196 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:02.628207 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:02.628217 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:02.628228 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:02.628238 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:02.628248 | orchestrator |
2025-08-29 17:06:02.628259 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-08-29 17:06:02.628271 | orchestrator | Friday 29 August 2025 17:05:58 +0000 (0:00:03.552) 0:06:11.201 *********
2025-08-29 17:06:02.628282 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:06:02.628292 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:06:02.628303 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:06:02.628314 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:06:02.628324 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:06:02.628335 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:06:02.628345 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:06:02.628356 | orchestrator |
2025-08-29 17:06:02.628367 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-08-29 17:06:02.628379 | orchestrator | Friday 29 August 2025 17:05:58 +0000 (0:00:00.527) 0:06:11.728 *********
2025-08-29 17:06:02.628389 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-08-29 17:06:02.628400 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-08-29 17:06:02.628411 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:06:02.628429 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-08-29 17:06:02.628440 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-08-29 17:06:02.628451 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:06:02.628462 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-08-29 17:06:02.628472 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-08-29 17:06:02.628483 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:06:02.628493 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-08-29 17:06:02.628504 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-08-29 17:06:02.628514 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:06:02.628525 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-08-29 17:06:02.628536 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-08-29 17:06:02.628546 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:06:02.628557 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-08-29 17:06:02.628568 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-08-29 17:06:02.628578 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:06:02.628589 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-08-29 17:06:02.628599 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-08-29 17:06:02.628610 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:06:02.628620 | orchestrator |
2025-08-29 17:06:02.628631 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-08-29 17:06:02.628642 | orchestrator | Friday 29 August 2025 17:05:59 +0000 (0:00:00.773) 0:06:12.502 *********
2025-08-29 17:06:02.628652 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:06:02.628663 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:06:02.628674 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:06:02.628684 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:06:02.628695 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:06:02.628705 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:06:02.628716 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:06:02.628727 | orchestrator |
2025-08-29 17:06:02.628737 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-08-29 17:06:02.628748 | orchestrator | Friday 29 August 2025 17:05:59 +0000 (0:00:00.551) 0:06:13.054 *********
2025-08-29 17:06:02.628759 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:06:02.628770 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:06:02.628780 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:06:02.628791 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:06:02.628801 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:06:02.628812 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:06:02.628823 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:06:02.628833 | orchestrator |
2025-08-29 17:06:02.628844 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-08-29 17:06:02.628855 | orchestrator | Friday 29 August 2025 17:06:00 +0000 (0:00:00.533) 0:06:13.608 *********
2025-08-29 17:06:02.628865 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:06:02.628876 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:06:02.628886 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:06:02.628897 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:06:02.628907 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:06:02.628918 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:06:02.628928 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:06:02.628939 | orchestrator |
2025-08-29 17:06:02.628950 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-08-29 17:06:02.628961 | orchestrator | Friday 29 August 2025 17:06:01 +0000 (0:00:00.533) 0:06:14.141 *********
2025-08-29 17:06:02.628990 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:02.629008 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:06:24.517070 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:06:24.517177 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:06:24.517191 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:06:24.517202 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:06:24.517212 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:06:24.517223 | orchestrator |
2025-08-29 17:06:24.517235 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-08-29 17:06:24.517246 | orchestrator | Friday 29 August 2025 17:06:02 +0000 (0:00:01.594) 0:06:15.735 *********
2025-08-29 17:06:24.517257 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:06:24.517269 | orchestrator |
2025-08-29 17:06:24.517279 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-08-29 17:06:24.517289 | orchestrator | Friday 29 August 2025 17:06:03 +0000 (0:00:01.110) 0:06:16.845 *********
2025-08-29 17:06:24.517299 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:24.517308 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:24.517319 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:24.517329 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:24.517338 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:24.517348 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:24.517358 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:24.517367 | orchestrator |
2025-08-29 17:06:24.517377 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-08-29 17:06:24.517387 | orchestrator | Friday 29 August 2025 17:06:04 +0000 (0:00:00.846) 0:06:17.692 *********
2025-08-29 17:06:24.517396 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:24.517406 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:24.517416 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:24.517425 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:24.517435 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:24.517445 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:24.517454 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:24.517465 | orchestrator |
2025-08-29 17:06:24.517475 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-08-29 17:06:24.517484 | orchestrator | Friday 29 August 2025 17:06:05 +0000 (0:00:00.876) 0:06:18.569 *********
2025-08-29 17:06:24.517494 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:24.517504 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:24.517513 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:24.517523 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:24.517532 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:24.517542 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:24.517551 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:24.517561 | orchestrator |
2025-08-29 17:06:24.517571 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-08-29 17:06:24.517581 | orchestrator | Friday 29 August 2025 17:06:06 +0000 (0:00:01.330) 0:06:19.900 *********
2025-08-29 17:06:24.517591 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:06:24.517602 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:06:24.517614 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:06:24.517625 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:06:24.517636 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:06:24.517647 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:06:24.517657 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:06:24.517668 | orchestrator |
2025-08-29 17:06:24.517696 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-08-29 17:06:24.517708 | orchestrator | Friday 29 August 2025 17:06:08 +0000 (0:00:01.572) 0:06:21.472 *********
2025-08-29 17:06:24.517719 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:24.517730 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:24.517742 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:24.517772 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:24.517784 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:24.517796 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:24.517807 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:24.517818 | orchestrator |
2025-08-29 17:06:24.517830 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-08-29 17:06:24.517841 | orchestrator | Friday 29 August 2025 17:06:09 +0000 (0:00:01.318) 0:06:22.790 *********
2025-08-29 17:06:24.517852 | orchestrator | changed: [testbed-manager]
2025-08-29 17:06:24.517862 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:24.517873 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:24.517884 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:24.517894 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:24.517904 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:24.517915 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:24.517926 | orchestrator |
2025-08-29 17:06:24.517936 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-08-29 17:06:24.517952 | orchestrator | Friday 29 August 2025 17:06:11 +0000 (0:00:01.429) 0:06:24.220 *********
2025-08-29 17:06:24.517963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:06:24.517994 | orchestrator |
2025-08-29 17:06:24.518005 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-08-29 17:06:24.518052 | orchestrator | Friday 29 August 2025 17:06:12 +0000 (0:00:01.146) 0:06:25.367 *********
2025-08-29 17:06:24.518065 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:06:24.518074 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:24.518084 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:06:24.518093 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:06:24.518102 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:06:24.518112 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:06:24.518122 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:06:24.518132 | orchestrator |
2025-08-29 17:06:24.518149 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-08-29 17:06:24.518166 | orchestrator | Friday 29 August 2025 17:06:13 +0000 (0:00:01.360) 0:06:26.727 *********
2025-08-29 17:06:24.518182 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:24.518199 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:06:24.518237 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:06:24.518255 | orchestrator | ok: [testbed-node-2]
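The `daemon.json` deployed above is rendered from the role's template and its contents vary per deployment; the values below are assumptions for illustration only, showing the kind of payload and the JSON round-trip check that matters because dockerd refuses to start on an invalid file:

```python
import json

# Illustrative daemon.json payload -- NOT the role's actual output.
# These keys are common dockerd options; real values come from the template.
daemon_conf = {
    "log-driver": "json-file",
    "log-opts": {"max-size": "10m", "max-file": "3"},
    "live-restore": True,
}
text = json.dumps(daemon_conf, indent=2, sort_keys=True)
print(text)
# dockerd rejects malformed JSON at startup, so round-trip as a sanity check:
assert json.loads(text) == daemon_conf
```

A changed `daemon.json` is why the subsequent service tasks reload systemd and manage the docker, docker socket, and containerd units.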
2025-08-29 17:06:24.518270 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:06:24.518287 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:06:24.518304 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:06:24.518319 | orchestrator |
2025-08-29 17:06:24.518334 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-08-29 17:06:24.518353 | orchestrator | Friday 29 August 2025 17:06:14 +0000 (0:00:01.118)       0:06:27.845 *********
2025-08-29 17:06:24.518368 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:24.518389 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:06:24.518404 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:06:24.518419 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:06:24.518434 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:06:24.518449 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:06:24.518465 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:06:24.518480 | orchestrator |
2025-08-29 17:06:24.518498 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-08-29 17:06:24.518514 | orchestrator | Friday 29 August 2025 17:06:15 +0000 (0:00:01.153)       0:06:28.998 *********
2025-08-29 17:06:24.518529 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:24.518544 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:06:24.518560 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:06:24.518576 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:06:24.518606 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:06:24.518623 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:06:24.518651 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:06:24.518670 | orchestrator |
2025-08-29 17:06:24.518691 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-08-29 17:06:24.518706 | orchestrator | Friday 29 August 2025 17:06:16 +0000 (0:00:01.119)       0:06:30.118 *********
2025-08-29 17:06:24.518728 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:06:24.518746 | orchestrator |
2025-08-29 17:06:24.518761 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 17:06:24.518775 | orchestrator | Friday 29 August 2025 17:06:18 +0000 (0:00:01.116)       0:06:31.234 *********
2025-08-29 17:06:24.518791 | orchestrator |
2025-08-29 17:06:24.518810 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 17:06:24.518824 | orchestrator | Friday 29 August 2025 17:06:18 +0000 (0:00:00.042)       0:06:31.277 *********
2025-08-29 17:06:24.518842 | orchestrator |
2025-08-29 17:06:24.518866 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 17:06:24.518892 | orchestrator | Friday 29 August 2025 17:06:18 +0000 (0:00:00.039)       0:06:31.316 *********
2025-08-29 17:06:24.518923 | orchestrator |
2025-08-29 17:06:24.518938 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 17:06:24.518965 | orchestrator | Friday 29 August 2025 17:06:18 +0000 (0:00:00.050)       0:06:31.367 *********
2025-08-29 17:06:24.519006 | orchestrator |
2025-08-29 17:06:24.519021 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 17:06:24.519048 | orchestrator | Friday 29 August 2025 17:06:18 +0000 (0:00:00.040)       0:06:31.408 *********
2025-08-29 17:06:24.519064 | orchestrator |
2025-08-29 17:06:24.519087 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 17:06:24.519114 | orchestrator | Friday 29 August 2025 17:06:18 +0000 (0:00:00.038)       0:06:31.446 *********
2025-08-29 17:06:24.519127 | orchestrator |
2025-08-29 17:06:24.519142 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 17:06:24.519163 | orchestrator | Friday 29 August 2025 17:06:18 +0000 (0:00:00.048)       0:06:31.494 *********
2025-08-29 17:06:24.519178 | orchestrator |
2025-08-29 17:06:24.519213 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-08-29 17:06:24.519233 | orchestrator | Friday 29 August 2025 17:06:18 +0000 (0:00:00.043)       0:06:31.538 *********
2025-08-29 17:06:24.519260 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:06:24.519274 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:06:24.519296 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:06:24.519315 | orchestrator |
2025-08-29 17:06:24.519333 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-08-29 17:06:24.519348 | orchestrator | Friday 29 August 2025 17:06:19 +0000 (0:00:01.119)       0:06:32.657 *********
2025-08-29 17:06:24.519369 | orchestrator | changed: [testbed-manager]
2025-08-29 17:06:24.519411 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:24.519429 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:24.519447 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:24.519472 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:24.519493 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:24.519507 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:24.519523 | orchestrator |
2025-08-29 17:06:24.519549 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-08-29 17:06:24.519577 | orchestrator | Friday 29 August 2025 17:06:20 +0000 (0:00:02.556)       0:06:33.947 *********
2025-08-29 17:06:24.519603 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:06:24.519643 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:24.519666 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:24.519685 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:24.519707 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:24.519734 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:24.519761 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:24.519776 | orchestrator |
2025-08-29 17:06:24.519791 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-08-29 17:06:24.519806 | orchestrator | Friday 29 August 2025 17:06:23 +0000 (0:00:02.556)       0:06:36.503 *********
2025-08-29 17:06:24.519821 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:06:24.519834 | orchestrator |
2025-08-29 17:06:24.519850 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-08-29 17:06:24.519868 | orchestrator | Friday 29 August 2025 17:06:23 +0000 (0:00:00.117)       0:06:36.621 *********
2025-08-29 17:06:24.519891 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:24.519906 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:24.519920 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:24.519933 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:24.519958 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:50.804517 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:50.804629 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:50.804645 | orchestrator |
2025-08-29 17:06:50.804657 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-08-29 17:06:50.804670 | orchestrator | Friday 29 August 2025 17:06:24 +0000 (0:00:01.003)       0:06:37.624 *********
2025-08-29 17:06:50.804682 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:06:50.804693 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:06:50.804704 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:06:50.804715 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:06:50.804725 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:06:50.804736 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:06:50.804746 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:06:50.804757 | orchestrator |
2025-08-29 17:06:50.804768 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-08-29 17:06:50.804779 | orchestrator | Friday 29 August 2025 17:06:25 +0000 (0:00:00.542)       0:06:38.167 *********
2025-08-29 17:06:50.804791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:06:50.804805 | orchestrator |
2025-08-29 17:06:50.804816 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-08-29 17:06:50.804826 | orchestrator | Friday 29 August 2025 17:06:26 +0000 (0:00:01.116)       0:06:39.284 *********
2025-08-29 17:06:50.804837 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:50.804848 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:06:50.804860 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:06:50.804871 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:06:50.804881 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:06:50.804892 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:06:50.804902 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:06:50.804913 | orchestrator |
2025-08-29 17:06:50.804924 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-08-29 17:06:50.804935 | orchestrator | Friday 29 August 2025 17:06:27 +0000 (0:00:00.859)       0:06:40.143 *********
2025-08-29 17:06:50.804946 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-08-29 17:06:50.804956 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-08-29 17:06:50.804967 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-08-29 17:06:50.805017 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-08-29 17:06:50.805027 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-08-29 17:06:50.805038 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-08-29 17:06:50.805049 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-08-29 17:06:50.805060 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-08-29 17:06:50.805072 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-08-29 17:06:50.805108 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-08-29 17:06:50.805121 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-08-29 17:06:50.805133 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-08-29 17:06:50.805145 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-08-29 17:06:50.805157 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-08-29 17:06:50.805169 | orchestrator |
2025-08-29 17:06:50.805180 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-08-29 17:06:50.805193 | orchestrator | Friday 29 August 2025 17:06:29 +0000 (0:00:02.959)       0:06:43.103 *********
2025-08-29 17:06:50.805205 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:06:50.805217 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:06:50.805229 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:06:50.805241 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:06:50.805253 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:06:50.805266 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:06:50.805278 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:06:50.805290 | orchestrator |
2025-08-29 17:06:50.805302 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-08-29 17:06:50.805315 | orchestrator | Friday 29 August 2025 17:06:30 +0000 (0:00:00.530)       0:06:43.633 *********
2025-08-29 17:06:50.805329 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:06:50.805343 | orchestrator |
2025-08-29 17:06:50.805370 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-08-29 17:06:50.805383 | orchestrator | Friday 29 August 2025 17:06:31 +0000 (0:00:01.061)       0:06:44.694 *********
2025-08-29 17:06:50.805395 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:50.805407 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:06:50.805419 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:06:50.805430 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:06:50.805441 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:06:50.805451 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:06:50.805462 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:06:50.805472 | orchestrator |
2025-08-29 17:06:50.805483 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-08-29 17:06:50.805494 | orchestrator | Friday 29 August 2025 17:06:32 +0000 (0:00:00.876)       0:06:45.570 *********
2025-08-29 17:06:50.805505 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:50.805515 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:06:50.805526 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:06:50.805537 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:06:50.805547 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:06:50.805558 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:06:50.805568 | orchestrator | ok: [testbed-node-5]
2025-08-29
17:06:50.805579 | orchestrator |
2025-08-29 17:06:50.805590 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-08-29 17:06:50.805619 | orchestrator | Friday 29 August 2025 17:06:33 +0000 (0:00:00.853)       0:06:46.424 *********
2025-08-29 17:06:50.805630 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:06:50.805641 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:06:50.805651 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:06:50.805662 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:06:50.805673 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:06:50.805684 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:06:50.805695 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:06:50.805705 | orchestrator |
2025-08-29 17:06:50.805716 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-08-29 17:06:50.805727 | orchestrator | Friday 29 August 2025 17:06:33 +0000 (0:00:00.533)       0:06:46.958 *********
2025-08-29 17:06:50.805750 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:50.805762 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:06:50.805772 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:06:50.805783 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:06:50.805794 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:06:50.805805 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:06:50.805815 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:06:50.805826 | orchestrator |
2025-08-29 17:06:50.805837 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-08-29 17:06:50.805848 | orchestrator | Friday 29 August 2025 17:06:35 +0000 (0:00:01.572)       0:06:48.531 *********
2025-08-29 17:06:50.805858 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:06:50.805869 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:06:50.805880 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:06:50.805891 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:06:50.805901 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:06:50.805912 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:06:50.805923 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:06:50.805933 | orchestrator |
2025-08-29 17:06:50.805944 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-08-29 17:06:50.805955 | orchestrator | Friday 29 August 2025 17:06:35 +0000 (0:00:00.442)       0:06:48.974 *********
2025-08-29 17:06:50.805966 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:50.805999 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:50.806010 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:50.806076 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:50.806088 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:50.806099 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:50.806110 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:50.806121 | orchestrator |
2025-08-29 17:06:50.806132 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-08-29 17:06:50.806142 | orchestrator | Friday 29 August 2025 17:06:43 +0000 (0:00:07.629)       0:06:56.604 *********
2025-08-29 17:06:50.806153 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:50.806164 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:50.806175 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:50.806185 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:50.806196 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:50.806207 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:50.806218 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:50.806228 | orchestrator |
2025-08-29 17:06:50.806239 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-08-29 17:06:50.806250 | orchestrator | Friday 29 August 2025 17:06:44 +0000 (0:00:01.366)       0:06:57.970 *********
2025-08-29 17:06:50.806261 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:50.806271 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:50.806282 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:50.806293 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:50.806303 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:50.806314 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:50.806325 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:50.806336 | orchestrator |
2025-08-29 17:06:50.806346 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-08-29 17:06:50.806357 | orchestrator | Friday 29 August 2025 17:06:46 +0000 (0:00:01.729)       0:06:59.700 *********
2025-08-29 17:06:50.806368 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:50.806379 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:06:50.806389 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:06:50.806400 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:06:50.806411 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:06:50.806421 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:06:50.806432 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:06:50.806443 | orchestrator |
2025-08-29 17:06:50.806454 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-08-29 17:06:50.806472 | orchestrator | Friday 29 August 2025 17:06:48 +0000 (0:00:01.879)       0:07:01.579 *********
2025-08-29 17:06:50.806483 | orchestrator | ok: [testbed-manager]
2025-08-29 17:06:50.806494 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:06:50.806505 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:06:50.806516 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:06:50.806526 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:06:50.806537 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:06:50.806548 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:06:50.806558 | orchestrator |
2025-08-29 17:06:50.806575 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-08-29 17:06:50.806587 | orchestrator | Friday 29 August 2025 17:06:49 +0000 (0:00:00.821)       0:07:02.401 *********
2025-08-29 17:06:50.806598 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:06:50.806608 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:06:50.806619 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:06:50.806630 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:06:50.806641 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:06:50.806651 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:06:50.806662 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:06:50.806673 | orchestrator |
2025-08-29 17:06:50.806684 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-08-29 17:06:50.806695 | orchestrator | Friday 29 August 2025 17:06:50 +0000 (0:00:01.028)       0:07:03.430 *********
2025-08-29 17:06:50.806706 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:06:50.806716 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:06:50.806727 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:06:50.806738 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:06:50.806749 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:06:50.806774 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:06:50.806785 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:06:50.806807 | orchestrator |
2025-08-29 17:06:50.806826 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-08-29 17:07:23.031924 | orchestrator | Friday 29 August 2025 17:06:50 +0000 (0:00:00.485)       0:07:03.915
*********
2025-08-29 17:07:23.032045 | orchestrator | ok: [testbed-manager]
2025-08-29 17:07:23.032059 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:07:23.032069 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:07:23.032077 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:07:23.032086 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:07:23.032094 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:07:23.032102 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:07:23.032110 | orchestrator |
2025-08-29 17:07:23.032119 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-08-29 17:07:23.032128 | orchestrator | Friday 29 August 2025 17:06:51 +0000 (0:00:00.488)       0:07:04.403 *********
2025-08-29 17:07:23.032136 | orchestrator | ok: [testbed-manager]
2025-08-29 17:07:23.032144 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:07:23.032152 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:07:23.032160 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:07:23.032168 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:07:23.032175 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:07:23.032183 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:07:23.032191 | orchestrator |
2025-08-29 17:07:23.032199 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-08-29 17:07:23.032208 | orchestrator | Friday 29 August 2025 17:06:51 +0000 (0:00:00.455)       0:07:04.859 *********
2025-08-29 17:07:23.032216 | orchestrator | ok: [testbed-manager]
2025-08-29 17:07:23.032223 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:07:23.032231 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:07:23.032239 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:07:23.032247 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:07:23.032255 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:07:23.032263 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:07:23.032271 | orchestrator |
2025-08-29 17:07:23.032279 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-08-29 17:07:23.032307 | orchestrator | Friday 29 August 2025 17:06:52 +0000 (0:00:00.493)       0:07:05.352 *********
2025-08-29 17:07:23.032315 | orchestrator | ok: [testbed-manager]
2025-08-29 17:07:23.032323 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:07:23.032331 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:07:23.032339 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:07:23.032346 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:07:23.032354 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:07:23.032362 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:07:23.032369 | orchestrator |
2025-08-29 17:07:23.032377 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-08-29 17:07:23.032385 | orchestrator | Friday 29 August 2025 17:06:57 +0000 (0:00:05.576)       0:07:10.929 *********
2025-08-29 17:07:23.032393 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:07:23.032402 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:07:23.032409 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:07:23.032417 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:07:23.032425 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:07:23.032433 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:07:23.032440 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:07:23.032448 | orchestrator |
2025-08-29 17:07:23.032456 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-08-29 17:07:23.032464 | orchestrator | Friday 29 August 2025 17:06:58 +0000 (0:00:00.567)       0:07:11.496 *********
2025-08-29 17:07:23.032473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:07:23.032483 | orchestrator |
2025-08-29 17:07:23.032493 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-08-29 17:07:23.032503 | orchestrator | Friday 29 August 2025 17:06:59 +0000 (0:00:00.915)       0:07:12.411 *********
2025-08-29 17:07:23.032512 | orchestrator | ok: [testbed-manager]
2025-08-29 17:07:23.032521 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:07:23.032530 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:07:23.032539 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:07:23.032548 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:07:23.032557 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:07:23.032566 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:07:23.032574 | orchestrator |
2025-08-29 17:07:23.032583 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-08-29 17:07:23.032592 | orchestrator | Friday 29 August 2025 17:07:01 +0000 (0:00:01.981)       0:07:14.393 *********
2025-08-29 17:07:23.032601 | orchestrator | ok: [testbed-manager]
2025-08-29 17:07:23.032610 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:07:23.032619 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:07:23.032628 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:07:23.032637 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:07:23.032646 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:07:23.032655 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:07:23.032664 | orchestrator |
2025-08-29 17:07:23.032674 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-08-29 17:07:23.032683 | orchestrator | Friday 29 August 2025 17:07:02 +0000 (0:00:01.115)       0:07:15.508 *********
2025-08-29 17:07:23.032692 | orchestrator | ok: [testbed-manager]
2025-08-29 17:07:23.032701 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:07:23.032710 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:07:23.032720 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:07:23.032728 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:07:23.032738 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:07:23.032747 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:07:23.032755 | orchestrator |
2025-08-29 17:07:23.032765 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-08-29 17:07:23.032774 | orchestrator | Friday 29 August 2025 17:07:03 +0000 (0:00:00.963)       0:07:16.472 *********
2025-08-29 17:07:23.032790 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 17:07:23.032800 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 17:07:23.032810 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 17:07:23.032833 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 17:07:23.032842 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 17:07:23.032850 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 17:07:23.032858 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 17:07:23.032866 | orchestrator |
2025-08-29 17:07:23.032873 | orchestrator | TASK [osism.services.lldpd : Include
distribution specific install tasks] ******
2025-08-29 17:07:23.032881 | orchestrator | Friday 29 August 2025 17:07:05 +0000 (0:00:01.744)       0:07:18.216 *********
2025-08-29 17:07:23.032890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:07:23.032898 | orchestrator |
2025-08-29 17:07:23.032906 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-08-29 17:07:23.032914 | orchestrator | Friday 29 August 2025 17:07:06 +0000 (0:00:01.081)       0:07:19.298 *********
2025-08-29 17:07:23.032921 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:07:23.032929 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:07:23.032937 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:07:23.032945 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:07:23.032953 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:07:23.032976 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:07:23.032984 | orchestrator | changed: [testbed-manager]
2025-08-29 17:07:23.032992 | orchestrator |
2025-08-29 17:07:23.033000 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-08-29 17:07:23.033008 | orchestrator | Friday 29 August 2025 17:07:14 +0000 (0:00:08.729)       0:07:28.028 *********
2025-08-29 17:07:23.033016 | orchestrator | ok: [testbed-manager]
2025-08-29 17:07:23.033024 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:07:23.033032 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:07:23.033040 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:07:23.033048 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:07:23.033055 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:07:23.033063 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:07:23.033071 | orchestrator |
2025-08-29 17:07:23.033079 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-08-29 17:07:23.033087 | orchestrator | Friday 29 August 2025 17:07:16 +0000 (0:00:02.013)       0:07:30.041 *********
2025-08-29 17:07:23.033095 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:07:23.033102 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:07:23.033110 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:07:23.033118 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:07:23.033126 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:07:23.033133 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:07:23.033141 | orchestrator |
2025-08-29 17:07:23.033149 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-08-29 17:07:23.033197 | orchestrator | Friday 29 August 2025 17:07:18 +0000 (0:00:01.290)       0:07:31.332 *********
2025-08-29 17:07:23.033213 | orchestrator | changed: [testbed-manager]
2025-08-29 17:07:23.033221 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:07:23.033228 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:07:23.033236 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:07:23.033244 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:07:23.033252 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:07:23.033260 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:07:23.033268 | orchestrator |
2025-08-29 17:07:23.033275 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-08-29 17:07:23.033283 | orchestrator |
2025-08-29 17:07:23.033291 | orchestrator | TASK [Include hardening role] **************************************************
2025-08-29 17:07:23.033299 | orchestrator | Friday 29 August 2025 17:07:19 +0000 (0:00:01.250)       0:07:32.583 *********
2025-08-29 17:07:23.033307 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:07:23.033315 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:07:23.033323 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:07:23.033330 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:07:23.033338 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:07:23.033346 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:07:23.033354 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:07:23.033362 | orchestrator |
2025-08-29 17:07:23.033373 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-08-29 17:07:23.033381 | orchestrator |
2025-08-29 17:07:23.033389 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-08-29 17:07:23.033397 | orchestrator | Friday 29 August 2025 17:07:19 +0000 (0:00:00.515)       0:07:33.098 *********
2025-08-29 17:07:23.033405 | orchestrator | changed: [testbed-manager]
2025-08-29 17:07:23.033413 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:07:23.033421 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:07:23.033428 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:07:23.033436 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:07:23.033444 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:07:23.033452 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:07:23.033459 | orchestrator |
2025-08-29 17:07:23.033467 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-08-29 17:07:23.033475 | orchestrator | Friday 29 August 2025 17:07:21 +0000 (0:00:01.321)       0:07:34.420 *********
2025-08-29 17:07:23.033483 | orchestrator | ok: [testbed-manager]
2025-08-29 17:07:23.033491 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:07:23.033499 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:07:23.033507 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:07:23.033515 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:07:23.033522 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:07:23.033530 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:07:23.033538 | orchestrator |
2025-08-29 17:07:23.033546 | orchestrator | TASK [Include auditd role] *****************************************************
2025-08-29 17:07:23.033559 | orchestrator | Friday 29 August 2025 17:07:23 +0000 (0:00:01.715)       0:07:36.135 *********
2025-08-29 17:07:46.486335 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:07:46.486445 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:07:46.486460 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:07:46.486472 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:07:46.486484 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:07:46.486495 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:07:46.486506 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:07:46.486518 | orchestrator |
2025-08-29 17:07:46.486530 | orchestrator | TASK [Include smartd role] *****************************************************
2025-08-29 17:07:46.486542 | orchestrator | Friday 29 August 2025 17:07:23 +0000 (0:00:00.526)       0:07:36.661 *********
2025-08-29 17:07:46.486553 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:07:46.486566 | orchestrator |
2025-08-29 17:07:46.486577 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-08-29 17:07:46.486611 | orchestrator | Friday 29 August 2025 17:07:24 +0000 (0:00:01.015)       0:07:37.677 *********
2025-08-29 17:07:46.486624 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:07:46.486638 | orchestrator |
2025-08-29 17:07:46.486649 | orchestrator |
TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-08-29 17:07:46.486660 | orchestrator | Friday 29 August 2025 17:07:25 +0000 (0:00:00.805) 0:07:38.483 ********* 2025-08-29 17:07:46.486671 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:07:46.486682 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:07:46.486693 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:07:46.486703 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:07:46.486714 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:07:46.486725 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:07:46.486735 | orchestrator | changed: [testbed-manager] 2025-08-29 17:07:46.486746 | orchestrator | 2025-08-29 17:07:46.486757 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-08-29 17:07:46.486768 | orchestrator | Friday 29 August 2025 17:07:33 +0000 (0:00:08.011) 0:07:46.494 ********* 2025-08-29 17:07:46.486779 | orchestrator | changed: [testbed-manager] 2025-08-29 17:07:46.486789 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:07:46.486800 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:07:46.486811 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:07:46.486821 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:07:46.486832 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:07:46.486842 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:07:46.486853 | orchestrator | 2025-08-29 17:07:46.486864 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-08-29 17:07:46.486875 | orchestrator | Friday 29 August 2025 17:07:34 +0000 (0:00:00.885) 0:07:47.380 ********* 2025-08-29 17:07:46.486888 | orchestrator | changed: [testbed-manager] 2025-08-29 17:07:46.486900 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:07:46.486913 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:07:46.486925 | 
orchestrator | changed: [testbed-node-2] 2025-08-29 17:07:46.486938 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:07:46.486974 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:07:46.486986 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:07:46.486998 | orchestrator | 2025-08-29 17:07:46.487010 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-08-29 17:07:46.487022 | orchestrator | Friday 29 August 2025 17:07:35 +0000 (0:00:01.572) 0:07:48.952 ********* 2025-08-29 17:07:46.487034 | orchestrator | changed: [testbed-manager] 2025-08-29 17:07:46.487046 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:07:46.487058 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:07:46.487071 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:07:46.487083 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:07:46.487094 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:07:46.487106 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:07:46.487118 | orchestrator | 2025-08-29 17:07:46.487131 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-08-29 17:07:46.487143 | orchestrator | Friday 29 August 2025 17:07:37 +0000 (0:00:01.787) 0:07:50.740 ********* 2025-08-29 17:07:46.487155 | orchestrator | changed: [testbed-manager] 2025-08-29 17:07:46.487166 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:07:46.487178 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:07:46.487190 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:07:46.487203 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:07:46.487214 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:07:46.487227 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:07:46.487238 | orchestrator | 2025-08-29 17:07:46.487249 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-08-29 
17:07:46.487268 | orchestrator | Friday 29 August 2025 17:07:38 +0000 (0:00:01.204) 0:07:51.945 ********* 2025-08-29 17:07:46.487278 | orchestrator | changed: [testbed-manager] 2025-08-29 17:07:46.487289 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:07:46.487300 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:07:46.487310 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:07:46.487321 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:07:46.487332 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:07:46.487342 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:07:46.487353 | orchestrator | 2025-08-29 17:07:46.487364 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-08-29 17:07:46.487374 | orchestrator | 2025-08-29 17:07:46.487385 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-08-29 17:07:46.487396 | orchestrator | Friday 29 August 2025 17:07:40 +0000 (0:00:01.333) 0:07:53.278 ********* 2025-08-29 17:07:46.487407 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:07:46.487418 | orchestrator | 2025-08-29 17:07:46.487428 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-08-29 17:07:46.487454 | orchestrator | Friday 29 August 2025 17:07:40 +0000 (0:00:00.829) 0:07:54.108 ********* 2025-08-29 17:07:46.487466 | orchestrator | ok: [testbed-manager] 2025-08-29 17:07:46.487478 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:07:46.487488 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:07:46.487499 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:07:46.487510 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:07:46.487520 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:07:46.487531 | orchestrator | ok: [testbed-node-5] 2025-08-29 
17:07:46.487541 | orchestrator | 2025-08-29 17:07:46.487552 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-08-29 17:07:46.487563 | orchestrator | Friday 29 August 2025 17:07:41 +0000 (0:00:00.823) 0:07:54.931 ********* 2025-08-29 17:07:46.487574 | orchestrator | changed: [testbed-manager] 2025-08-29 17:07:46.487585 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:07:46.487596 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:07:46.487606 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:07:46.487617 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:07:46.487627 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:07:46.487638 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:07:46.487649 | orchestrator | 2025-08-29 17:07:46.487659 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-08-29 17:07:46.487670 | orchestrator | Friday 29 August 2025 17:07:43 +0000 (0:00:01.363) 0:07:56.295 ********* 2025-08-29 17:07:46.487681 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:07:46.487692 | orchestrator | 2025-08-29 17:07:46.487702 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-08-29 17:07:46.487713 | orchestrator | Friday 29 August 2025 17:07:44 +0000 (0:00:00.913) 0:07:57.209 ********* 2025-08-29 17:07:46.487724 | orchestrator | ok: [testbed-manager] 2025-08-29 17:07:46.487734 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:07:46.487745 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:07:46.487756 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:07:46.487766 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:07:46.487777 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:07:46.487787 | orchestrator | ok: [testbed-node-5] 2025-08-29 
17:07:46.487798 | orchestrator | 2025-08-29 17:07:46.487809 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-08-29 17:07:46.487820 | orchestrator | Friday 29 August 2025 17:07:45 +0000 (0:00:00.988) 0:07:58.197 ********* 2025-08-29 17:07:46.487830 | orchestrator | changed: [testbed-manager] 2025-08-29 17:07:46.487841 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:07:46.487859 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:07:46.487869 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:07:46.487880 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:07:46.487891 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:07:46.487901 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:07:46.487912 | orchestrator | 2025-08-29 17:07:46.487923 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:07:46.487935 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-08-29 17:07:46.487984 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-08-29 17:07:46.487997 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 17:07:46.488008 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 17:07:46.488019 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 17:07:46.488030 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 17:07:46.488041 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 17:07:46.488051 | orchestrator | 2025-08-29 17:07:46.488062 | orchestrator | 2025-08-29 
17:07:46.488073 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:07:46.488096 | orchestrator | Friday 29 August 2025 17:07:46 +0000 (0:00:01.380) 0:07:59.577 ********* 2025-08-29 17:07:46.488108 | orchestrator | =============================================================================== 2025-08-29 17:07:46.488119 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.66s 2025-08-29 17:07:46.488130 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.70s 2025-08-29 17:07:46.488140 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.51s 2025-08-29 17:07:46.488151 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.71s 2025-08-29 17:07:46.488162 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.46s 2025-08-29 17:07:46.488173 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.40s 2025-08-29 17:07:46.488185 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.51s 2025-08-29 17:07:46.488195 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.16s 2025-08-29 17:07:46.488206 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.73s 2025-08-29 17:07:46.488217 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.42s 2025-08-29 17:07:46.488234 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.33s 2025-08-29 17:07:47.001847 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.22s 2025-08-29 17:07:47.001936 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.01s 2025-08-29 17:07:47.001986 | 
orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.63s 2025-08-29 17:07:47.001999 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.56s 2025-08-29 17:07:47.002010 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.55s 2025-08-29 17:07:47.002067 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.40s 2025-08-29 17:07:47.002079 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.03s 2025-08-29 17:07:47.002115 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.77s 2025-08-29 17:07:47.002127 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.58s 2025-08-29 17:07:47.340303 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-08-29 17:07:47.340373 | orchestrator | + osism apply network 2025-08-29 17:08:00.355385 | orchestrator | 2025-08-29 17:08:00 | INFO  | Task 7c0878d3-172c-4fc8-8d91-f12ef9ab23af (network) was prepared for execution. 2025-08-29 17:08:00.355499 | orchestrator | 2025-08-29 17:08:00 | INFO  | It takes a moment until task 7c0878d3-172c-4fc8-8d91-f12ef9ab23af (network) has been started and output is visible here. 
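The `osism apply network` call above runs the `osism.commons.network` role, whose output follows. On these Ubuntu nodes the role renders netplan files and, for the VXLAN overlay, systemd-networkd unit files. As a minimal sketch, what such a `.netdev` unit could look like, using only the VNI, MTU, and local-IP values that appear later in the task output; the file name and exact key set are assumptions, not taken from the role's actual template:

```ini
# Hypothetical /etc/systemd/network/vxlan0.netdev (illustrative sketch only;
# VNI=42, MTU 1350, Local=192.168.16.5 are the manager's values from the log)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
```

The per-host `dests` lists in the task output suggest unicast flooding to each remote VTEP, which systemd-networkd typically models with static FDB entries in the matching `.network` file rather than a single `Remote=` address.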
2025-08-29 17:08:30.647885 | orchestrator |
2025-08-29 17:08:30.648031 | orchestrator | PLAY [Apply role network] ******************************************************
2025-08-29 17:08:30.648049 | orchestrator |
2025-08-29 17:08:30.648061 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-08-29 17:08:30.648072 | orchestrator | Friday 29 August 2025 17:08:04 +0000 (0:00:00.277) 0:00:00.277 *********
2025-08-29 17:08:30.648083 | orchestrator | ok: [testbed-manager]
2025-08-29 17:08:30.648095 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:08:30.648106 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:08:30.648117 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:08:30.648127 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:08:30.648138 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:08:30.648149 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:08:30.648160 | orchestrator |
2025-08-29 17:08:30.648171 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-08-29 17:08:30.648182 | orchestrator | Friday 29 August 2025 17:08:05 +0000 (0:00:00.727) 0:00:01.005 *********
2025-08-29 17:08:30.648195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:08:30.648209 | orchestrator |
2025-08-29 17:08:30.648220 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-08-29 17:08:30.648230 | orchestrator | Friday 29 August 2025 17:08:06 +0000 (0:00:01.275) 0:00:02.280 *********
2025-08-29 17:08:30.648241 | orchestrator | ok: [testbed-manager]
2025-08-29 17:08:30.648252 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:08:30.648263 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:08:30.648273 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:08:30.648284 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:08:30.648294 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:08:30.648305 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:08:30.648316 | orchestrator |
2025-08-29 17:08:30.648327 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-08-29 17:08:30.648338 | orchestrator | Friday 29 August 2025 17:08:08 +0000 (0:00:01.968) 0:00:04.249 *********
2025-08-29 17:08:30.648349 | orchestrator | ok: [testbed-manager]
2025-08-29 17:08:30.648359 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:08:30.648370 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:08:30.648380 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:08:30.648391 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:08:30.648410 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:08:30.648429 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:08:30.648448 | orchestrator |
2025-08-29 17:08:30.648466 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-08-29 17:08:30.648484 | orchestrator | Friday 29 August 2025 17:08:10 +0000 (0:00:01.725) 0:00:05.975 *********
2025-08-29 17:08:30.648503 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-08-29 17:08:30.648525 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-08-29 17:08:30.648562 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-08-29 17:08:30.648575 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-08-29 17:08:30.648586 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-08-29 17:08:30.648618 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-08-29 17:08:30.648629 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-08-29 17:08:30.648640 | orchestrator |
2025-08-29 17:08:30.648651 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-08-29 17:08:30.648661 | orchestrator | Friday 29 August 2025 17:08:11 +0000 (0:00:01.017) 0:00:06.992 *********
2025-08-29 17:08:30.648672 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 17:08:30.648684 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-08-29 17:08:30.648694 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 17:08:30.648705 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-08-29 17:08:30.648715 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-08-29 17:08:30.648726 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-08-29 17:08:30.648736 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 17:08:30.648747 | orchestrator |
2025-08-29 17:08:30.648758 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-08-29 17:08:30.648768 | orchestrator | Friday 29 August 2025 17:08:14 +0000 (0:00:03.456) 0:00:10.449 *********
2025-08-29 17:08:30.648779 | orchestrator | changed: [testbed-manager]
2025-08-29 17:08:30.648790 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:08:30.648801 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:08:30.648811 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:08:30.648822 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:08:30.648833 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:08:30.648844 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:08:30.648854 | orchestrator |
2025-08-29 17:08:30.648865 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-08-29 17:08:30.648876 | orchestrator | Friday 29 August 2025 17:08:16 +0000 (0:00:01.454) 0:00:11.904 *********
2025-08-29 17:08:30.648886 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 17:08:30.648897 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 17:08:30.648908 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-08-29 17:08:30.648918 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-08-29 17:08:30.648929 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 17:08:30.648939 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-08-29 17:08:30.648950 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-08-29 17:08:30.648985 | orchestrator |
2025-08-29 17:08:30.648996 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-08-29 17:08:30.649007 | orchestrator | Friday 29 August 2025 17:08:18 +0000 (0:00:01.994) 0:00:13.898 *********
2025-08-29 17:08:30.649017 | orchestrator | ok: [testbed-manager]
2025-08-29 17:08:30.649028 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:08:30.649039 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:08:30.649049 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:08:30.649060 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:08:30.649071 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:08:30.649081 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:08:30.649092 | orchestrator |
2025-08-29 17:08:30.649103 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-08-29 17:08:30.649133 | orchestrator | Friday 29 August 2025 17:08:19 +0000 (0:00:01.168) 0:00:15.067 *********
2025-08-29 17:08:30.649144 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:08:30.649155 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:08:30.649166 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:08:30.649176 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:08:30.649187 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:08:30.649197 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:08:30.649208 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:08:30.649219 | orchestrator |
2025-08-29 17:08:30.649230 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-08-29 17:08:30.649240 | orchestrator | Friday 29 August 2025 17:08:20 +0000 (0:00:00.747) 0:00:15.815 *********
2025-08-29 17:08:30.649259 | orchestrator | ok: [testbed-manager]
2025-08-29 17:08:30.649270 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:08:30.649281 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:08:30.649291 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:08:30.649302 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:08:30.649313 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:08:30.649323 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:08:30.649334 | orchestrator |
2025-08-29 17:08:30.649345 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-08-29 17:08:30.649355 | orchestrator | Friday 29 August 2025 17:08:22 +0000 (0:00:02.140) 0:00:17.955 *********
2025-08-29 17:08:30.649366 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:08:30.649377 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:08:30.649388 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:08:30.649404 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:08:30.649422 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:08:30.649440 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:08:30.649459 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-08-29 17:08:30.649481 | orchestrator |
2025-08-29 17:08:30.649499 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-08-29 17:08:30.649519 | orchestrator | Friday 29 August 2025 17:08:23 +0000 (0:00:00.934) 0:00:18.890 *********
2025-08-29 17:08:30.649537 | orchestrator | ok: [testbed-manager]
2025-08-29 17:08:30.649553 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:08:30.649565 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:08:30.649575 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:08:30.649586 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:08:30.649596 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:08:30.649607 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:08:30.649618 | orchestrator |
2025-08-29 17:08:30.649628 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-08-29 17:08:30.649639 | orchestrator | Friday 29 August 2025 17:08:26 +0000 (0:00:02.674) 0:00:21.564 *********
2025-08-29 17:08:30.649657 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:08:30.649670 | orchestrator |
2025-08-29 17:08:30.649680 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-08-29 17:08:30.649691 | orchestrator | Friday 29 August 2025 17:08:27 +0000 (0:00:01.319) 0:00:22.884 *********
2025-08-29 17:08:30.649701 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:08:30.649712 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:08:30.649722 | orchestrator | ok: [testbed-manager]
2025-08-29 17:08:30.649733 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:08:30.649744 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:08:30.649754 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:08:30.649765 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:08:30.649775 | orchestrator |
2025-08-29 17:08:30.649786 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-08-29 17:08:30.649797 | orchestrator | Friday 29 August 2025 17:08:28 +0000 (0:00:01.159) 0:00:24.044 *********
2025-08-29 17:08:30.649807 | orchestrator | ok: [testbed-manager]
2025-08-29 17:08:30.649818 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:08:30.649828 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:08:30.649839 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:08:30.649850 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:08:30.649860 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:08:30.649871 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:08:30.649881 | orchestrator |
2025-08-29 17:08:30.649892 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-08-29 17:08:30.649902 | orchestrator | Friday 29 August 2025 17:08:29 +0000 (0:00:00.884) 0:00:24.928 *********
2025-08-29 17:08:30.649921 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 17:08:30.649932 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 17:08:30.649943 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 17:08:30.649992 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 17:08:30.650005 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 17:08:30.650052 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 17:08:30.650066 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 17:08:30.650077 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 17:08:30.650088 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 17:08:30.650099 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 17:08:30.650138 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 17:08:30.650151 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 17:08:30.650161 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 17:08:30.650172 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 17:08:30.650183 | orchestrator |
2025-08-29 17:08:30.650204 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-08-29 17:08:48.469576 | orchestrator | Friday 29 August 2025 17:08:30 +0000 (0:00:01.216) 0:00:26.144 *********
2025-08-29 17:08:48.469691 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:08:48.469707 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:08:48.469719 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:08:48.469730 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:08:48.469741 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:08:48.469752 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:08:48.469763 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:08:48.469774 | orchestrator |
2025-08-29 17:08:48.469786 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-08-29 17:08:48.469798 | orchestrator | Friday 29 August 2025 17:08:31 +0000 (0:00:00.658) 0:00:26.803 *********
2025-08-29 17:08:48.469811 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-0, testbed-manager, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4
2025-08-29 17:08:48.469825 | orchestrator |
2025-08-29 17:08:48.469836 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-08-29 17:08:48.469847 | orchestrator | Friday 29 August 2025 17:08:36 +0000 (0:00:05.058) 0:00:31.862 *********
2025-08-29 17:08:48.469860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-08-29 17:08:48.469872 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-08-29 17:08:48.469884 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-08-29 17:08:48.469896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-08-29 17:08:48.469930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-08-29 17:08:48.469941 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-08-29 17:08:48.469952 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-08-29 17:08:48.470007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-08-29 17:08:48.470191 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-08-29 17:08:48.470209 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-08-29 17:08:48.470233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-08-29 17:08:48.470266 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-08-29 17:08:48.470280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-08-29 17:08:48.470293 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-08-29 17:08:48.470306 | orchestrator |
2025-08-29 17:08:48.470318 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-08-29 17:08:48.470331 | orchestrator | Friday 29 August 2025 17:08:42 +0000 (0:00:06.087) 0:00:37.950 *********
2025-08-29 17:08:48.470343 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-08-29 17:08:48.470356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-08-29 17:08:48.470369 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-08-29 17:08:48.470392 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-08-29 17:08:48.470411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-08-29 17:08:48.470423 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11',
'192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-08-29 17:08:48.470437 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-08-29 17:08:48.470449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-08-29 17:08:48.470462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:08:48.470474 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:08:48.470486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:08:48.470497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:08:48.470519 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:08:54.850637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:08:54.850738 | orchestrator | 2025-08-29 17:08:54.850755 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-08-29 17:08:54.850768 | orchestrator | Friday 29 August 2025 17:08:48 +0000 (0:00:06.017) 0:00:43.967 ********* 2025-08-29 17:08:54.850782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:08:54.850794 | orchestrator | 2025-08-29 17:08:54.850805 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-08-29 17:08:54.850817 | orchestrator | Friday 29 August 2025 17:08:49 +0000 (0:00:01.282) 0:00:45.249 ********* 2025-08-29 17:08:54.850853 | orchestrator | ok: [testbed-manager] 2025-08-29 17:08:54.850866 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:08:54.850877 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:08:54.850888 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:08:54.850898 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:08:54.850909 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:08:54.850920 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:08:54.850931 | orchestrator | 2025-08-29 17:08:54.850942 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2025-08-29 17:08:54.850976 | orchestrator | Friday 29 August 2025 17:08:50 +0000 (0:00:01.162) 0:00:46.412 ********* 2025-08-29 17:08:54.850990 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 17:08:54.851003 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 17:08:54.851013 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 17:08:54.851024 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 17:08:54.851035 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 17:08:54.851046 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 17:08:54.851071 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 17:08:54.851082 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 17:08:54.851093 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:08:54.851105 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 17:08:54.851115 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 17:08:54.851126 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 17:08:54.851137 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 17:08:54.851147 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:08:54.851158 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 17:08:54.851171 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  
2025-08-29 17:08:54.851183 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 17:08:54.851195 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 17:08:54.851207 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:08:54.851219 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 17:08:54.851231 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 17:08:54.851243 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 17:08:54.851255 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 17:08:54.851267 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:08:54.851279 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 17:08:54.851291 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 17:08:54.851303 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 17:08:54.851315 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 17:08:54.851327 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:08:54.851340 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:08:54.851352 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 17:08:54.851364 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 17:08:54.851385 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 17:08:54.851397 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 17:08:54.851409 | 
orchestrator | skipping: [testbed-node-5] 2025-08-29 17:08:54.851420 | orchestrator | 2025-08-29 17:08:54.851433 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-08-29 17:08:54.851462 | orchestrator | Friday 29 August 2025 17:08:53 +0000 (0:00:02.143) 0:00:48.556 ********* 2025-08-29 17:08:54.851474 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:08:54.851487 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:08:54.851499 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:08:54.851511 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:08:54.851522 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:08:54.851533 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:08:54.851544 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:08:54.851555 | orchestrator | 2025-08-29 17:08:54.851566 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-08-29 17:08:54.851576 | orchestrator | Friday 29 August 2025 17:08:53 +0000 (0:00:00.637) 0:00:49.194 ********* 2025-08-29 17:08:54.851587 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:08:54.851598 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:08:54.851608 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:08:54.851619 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:08:54.851630 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:08:54.851640 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:08:54.851651 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:08:54.851662 | orchestrator | 2025-08-29 17:08:54.851673 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:08:54.851684 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 17:08:54.851696 | orchestrator | testbed-node-0 : ok=20  changed=5  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:08:54.851707 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:08:54.851717 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:08:54.851728 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:08:54.851739 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:08:54.851755 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:08:54.851766 | orchestrator | 2025-08-29 17:08:54.851777 | orchestrator | 2025-08-29 17:08:54.851789 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:08:54.851799 | orchestrator | Friday 29 August 2025 17:08:54 +0000 (0:00:00.742) 0:00:49.936 ********* 2025-08-29 17:08:54.851810 | orchestrator | =============================================================================== 2025-08-29 17:08:54.851821 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.09s 2025-08-29 17:08:54.851832 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.02s 2025-08-29 17:08:54.851843 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 5.06s 2025-08-29 17:08:54.851853 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.46s 2025-08-29 17:08:54.851870 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 2.67s 2025-08-29 17:08:54.851881 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.14s 2025-08-29 17:08:54.851892 | orchestrator | osism.commons.network : Install 
package networkd-dispatcher ------------- 2.14s 2025-08-29 17:08:54.851903 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.99s 2025-08-29 17:08:54.851913 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.97s 2025-08-29 17:08:54.851924 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.73s 2025-08-29 17:08:54.851935 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.45s 2025-08-29 17:08:54.851946 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.32s 2025-08-29 17:08:54.851999 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.28s 2025-08-29 17:08:54.852012 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.28s 2025-08-29 17:08:54.852024 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.22s 2025-08-29 17:08:54.852035 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.17s 2025-08-29 17:08:54.852047 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.16s 2025-08-29 17:08:54.852059 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.16s 2025-08-29 17:08:54.852071 | orchestrator | osism.commons.network : Create required directories --------------------- 1.02s 2025-08-29 17:08:54.852083 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.93s 2025-08-29 17:08:55.175385 | orchestrator | + osism apply wireguard 2025-08-29 17:09:07.303148 | orchestrator | 2025-08-29 17:09:07 | INFO  | Task 0a45a2f2-5476-4b91-ad05-f6c680717799 (wireguard) was prepared for execution. 
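The "Create systemd networkd netdev files" and "network files" tasks above render one `.netdev`/`.network` pair per VXLAN. The actual templates live in the `osism.commons.network` role; the sketch below is an assumption about what the rendered `/etc/systemd/network/30-vxlan0.*` files on `testbed-manager` plausibly contain, reconstructed only from the item data visible in the log (`vni: 42`, `local_ip: 192.168.16.5`, `mtu: 1350`, the `dests` list, and the `192.168.112.5/20` address). With an empty `Group=`/multicast setup, unicast flooding to the remote VTEPs is typically done via all-zero [BridgeFDB] entries, one per destination:

```ini
# /etc/systemd/network/30-vxlan0.netdev (sketch, not the role's actual template)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5

# /etc/systemd/network/30-vxlan0.network (sketch)
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20

# One FDB flood entry per remote VTEP from the 'dests' list
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.10

[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.11
# ... and so on for 192.168.16.12-.15
```

Compute nodes get the same pair with `addresses: []`, i.e. no `Address=` line, which matches the empty address lists in their log items.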
2025-08-29 17:09:07.303252 | orchestrator | 2025-08-29 17:09:07 | INFO  | It takes a moment until task 0a45a2f2-5476-4b91-ad05-f6c680717799 (wireguard) has been started and output is visible here. 2025-08-29 17:09:28.153102 | orchestrator | 2025-08-29 17:09:28.153203 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-08-29 17:09:28.153218 | orchestrator | 2025-08-29 17:09:28.153229 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-08-29 17:09:28.153239 | orchestrator | Friday 29 August 2025 17:09:11 +0000 (0:00:00.241) 0:00:00.241 ********* 2025-08-29 17:09:28.153250 | orchestrator | ok: [testbed-manager] 2025-08-29 17:09:28.153261 | orchestrator | 2025-08-29 17:09:28.153271 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-08-29 17:09:28.153281 | orchestrator | Friday 29 August 2025 17:09:13 +0000 (0:00:01.679) 0:00:01.921 ********* 2025-08-29 17:09:28.153290 | orchestrator | changed: [testbed-manager] 2025-08-29 17:09:28.153301 | orchestrator | 2025-08-29 17:09:28.153311 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-08-29 17:09:28.153321 | orchestrator | Friday 29 August 2025 17:09:20 +0000 (0:00:07.024) 0:00:08.945 ********* 2025-08-29 17:09:28.153330 | orchestrator | changed: [testbed-manager] 2025-08-29 17:09:28.153340 | orchestrator | 2025-08-29 17:09:28.153350 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-08-29 17:09:28.153360 | orchestrator | Friday 29 August 2025 17:09:20 +0000 (0:00:00.601) 0:00:09.547 ********* 2025-08-29 17:09:28.153369 | orchestrator | changed: [testbed-manager] 2025-08-29 17:09:28.153379 | orchestrator | 2025-08-29 17:09:28.153389 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-08-29 17:09:28.153398 | orchestrator 
| Friday 29 August 2025 17:09:21 +0000 (0:00:00.414) 0:00:09.962 ********* 2025-08-29 17:09:28.153408 | orchestrator | ok: [testbed-manager] 2025-08-29 17:09:28.153418 | orchestrator | 2025-08-29 17:09:28.153428 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-08-29 17:09:28.153438 | orchestrator | Friday 29 August 2025 17:09:21 +0000 (0:00:00.560) 0:00:10.523 ********* 2025-08-29 17:09:28.153473 | orchestrator | ok: [testbed-manager] 2025-08-29 17:09:28.153483 | orchestrator | 2025-08-29 17:09:28.153493 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-08-29 17:09:28.153503 | orchestrator | Friday 29 August 2025 17:09:22 +0000 (0:00:00.571) 0:00:11.095 ********* 2025-08-29 17:09:28.153513 | orchestrator | ok: [testbed-manager] 2025-08-29 17:09:28.153523 | orchestrator | 2025-08-29 17:09:28.153533 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-08-29 17:09:28.153543 | orchestrator | Friday 29 August 2025 17:09:22 +0000 (0:00:00.437) 0:00:11.533 ********* 2025-08-29 17:09:28.153553 | orchestrator | changed: [testbed-manager] 2025-08-29 17:09:28.153562 | orchestrator | 2025-08-29 17:09:28.153586 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-08-29 17:09:28.153596 | orchestrator | Friday 29 August 2025 17:09:24 +0000 (0:00:01.259) 0:00:12.792 ********* 2025-08-29 17:09:28.153606 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 17:09:28.153616 | orchestrator | changed: [testbed-manager] 2025-08-29 17:09:28.153626 | orchestrator | 2025-08-29 17:09:28.153636 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-08-29 17:09:28.153646 | orchestrator | Friday 29 August 2025 17:09:25 +0000 (0:00:01.039) 0:00:13.832 ********* 2025-08-29 17:09:28.153681 | orchestrator | changed: 
[testbed-manager] 2025-08-29 17:09:28.153693 | orchestrator | 2025-08-29 17:09:28.153704 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-08-29 17:09:28.153715 | orchestrator | Friday 29 August 2025 17:09:26 +0000 (0:00:01.746) 0:00:15.579 ********* 2025-08-29 17:09:28.153727 | orchestrator | changed: [testbed-manager] 2025-08-29 17:09:28.153738 | orchestrator | 2025-08-29 17:09:28.153748 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:09:28.153760 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:09:28.153772 | orchestrator | 2025-08-29 17:09:28.153783 | orchestrator | 2025-08-29 17:09:28.153794 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:09:28.153806 | orchestrator | Friday 29 August 2025 17:09:27 +0000 (0:00:00.977) 0:00:16.556 ********* 2025-08-29 17:09:28.153817 | orchestrator | =============================================================================== 2025-08-29 17:09:28.153828 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.02s 2025-08-29 17:09:28.153839 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.75s 2025-08-29 17:09:28.153850 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.68s 2025-08-29 17:09:28.153861 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.26s 2025-08-29 17:09:28.153872 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.04s 2025-08-29 17:09:28.153883 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.98s 2025-08-29 17:09:28.153894 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.60s 
2025-08-29 17:09:28.153905 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.57s 2025-08-29 17:09:28.153916 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.56s 2025-08-29 17:09:28.153927 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s 2025-08-29 17:09:28.153938 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s 2025-08-29 17:09:28.469790 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-08-29 17:09:28.509498 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-08-29 17:09:28.509602 | orchestrator | Dload Upload Total Spent Left Speed 2025-08-29 17:09:28.602699 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 161 0 --:--:-- --:--:-- --:--:-- 163 2025-08-29 17:09:28.617576 | orchestrator | + osism apply --environment custom workarounds 2025-08-29 17:09:30.624918 | orchestrator | 2025-08-29 17:09:30 | INFO  | Trying to run play workarounds in environment custom 2025-08-29 17:09:40.733480 | orchestrator | 2025-08-29 17:09:40 | INFO  | Task ac5762b2-60e8-4e49-a9ba-938137866a6e (workarounds) was prepared for execution. 2025-08-29 17:09:40.733576 | orchestrator | 2025-08-29 17:09:40 | INFO  | It takes a moment until task ac5762b2-60e8-4e49-a9ba-938137866a6e (workarounds) has been started and output is visible here. 
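The wireguard play above generates server and preshared keys and templates `wg0.conf` before enabling `wg-quick@wg0.service`. The log does not show the file contents, so the following is only a sketch of a conventional `wg-quick` configuration; the tunnel subnet, listen port, and `AllowedIPs` are assumptions, and the key material (produced by the "Create public and private key" and "Create preshared key" tasks) is left as placeholders rather than invented:

```ini
# /etc/wireguard/wg0.conf (sketch; values below are assumptions, keys elided)
[Interface]
PrivateKey = <server-private-key>   ; from the 'Get private key - server' task
Address = 192.168.48.1/24           ; assumed tunnel subnet, not visible in the log
ListenPort = 51820                  ; WireGuard's conventional default port

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>      ; from the 'Get preshared key' task
AllowedIPs = 192.168.48.2/32
```

The matching client file written by "Copy client configuration files" would mirror this, with the server as the `[Peer]` plus an `Endpoint =` pointing at the manager's public address.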
2025-08-29 17:10:06.051545 | orchestrator | 2025-08-29 17:10:06.051662 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:10:06.051680 | orchestrator | 2025-08-29 17:10:06.051692 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-08-29 17:10:06.051704 | orchestrator | Friday 29 August 2025 17:09:44 +0000 (0:00:00.157) 0:00:00.157 ********* 2025-08-29 17:10:06.051716 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-08-29 17:10:06.051727 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-08-29 17:10:06.051738 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-08-29 17:10:06.051749 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-08-29 17:10:06.051760 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-08-29 17:10:06.051770 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-08-29 17:10:06.051781 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-08-29 17:10:06.051792 | orchestrator | 2025-08-29 17:10:06.051803 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-08-29 17:10:06.051814 | orchestrator | 2025-08-29 17:10:06.051825 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-08-29 17:10:06.051836 | orchestrator | Friday 29 August 2025 17:09:45 +0000 (0:00:00.807) 0:00:00.965 ********* 2025-08-29 17:10:06.051847 | orchestrator | ok: [testbed-manager] 2025-08-29 17:10:06.051859 | orchestrator | 2025-08-29 17:10:06.051870 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-08-29 17:10:06.051881 | orchestrator | 2025-08-29 17:10:06.051892 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-08-29 17:10:06.051904 | orchestrator | Friday 29 August 2025 17:09:48 +0000 (0:00:02.532) 0:00:03.498 ********* 2025-08-29 17:10:06.051923 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:10:06.051934 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:10:06.051945 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:10:06.051955 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:10:06.051966 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:10:06.052008 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:10:06.052020 | orchestrator | 2025-08-29 17:10:06.052031 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-08-29 17:10:06.052041 | orchestrator | 2025-08-29 17:10:06.052052 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-08-29 17:10:06.052064 | orchestrator | Friday 29 August 2025 17:09:49 +0000 (0:00:01.753) 0:00:05.251 ********* 2025-08-29 17:10:06.052076 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 17:10:06.052088 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 17:10:06.052102 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 17:10:06.052115 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 17:10:06.052127 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 17:10:06.052158 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 17:10:06.052171 | orchestrator | 2025-08-29 17:10:06.052185 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-08-29 17:10:06.052198 | orchestrator | Friday 29 August 2025 17:09:51 +0000 (0:00:01.538) 0:00:06.790 ********* 2025-08-29 17:10:06.052210 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:10:06.052223 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:10:06.052235 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:10:06.052248 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:10:06.052260 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:10:06.052273 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:10:06.052286 | orchestrator | 2025-08-29 17:10:06.052299 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-08-29 17:10:06.052312 | orchestrator | Friday 29 August 2025 17:09:55 +0000 (0:00:03.681) 0:00:10.472 ********* 2025-08-29 17:10:06.052324 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:10:06.052336 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:10:06.052349 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:10:06.052361 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:10:06.052374 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:10:06.052386 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:10:06.052399 | orchestrator | 2025-08-29 17:10:06.052412 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-08-29 17:10:06.052424 | orchestrator | 2025-08-29 17:10:06.052437 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-08-29 17:10:06.052450 | orchestrator | Friday 29 August 2025 17:09:55 +0000 (0:00:00.789) 0:00:11.261 ********* 2025-08-29 17:10:06.052460 | orchestrator | changed: [testbed-manager] 2025-08-29 17:10:06.052471 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:10:06.052482 | orchestrator | changed: [testbed-node-4] 2025-08-29 
17:10:06.052493 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:10:06.052503 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:10:06.052514 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:10:06.052524 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:10:06.052535 | orchestrator |
2025-08-29 17:10:06.052546 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-08-29 17:10:06.052557 | orchestrator | Friday 29 August 2025 17:09:57 +0000 (0:00:01.646) 0:00:12.907 *********
2025-08-29 17:10:06.052567 | orchestrator | changed: [testbed-manager]
2025-08-29 17:10:06.052578 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:10:06.052589 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:10:06.052600 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:10:06.052610 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:10:06.052621 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:10:06.052647 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:10:06.052659 | orchestrator |
2025-08-29 17:10:06.052670 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-08-29 17:10:06.052681 | orchestrator | Friday 29 August 2025 17:09:59 +0000 (0:00:01.645) 0:00:14.553 *********
2025-08-29 17:10:06.052692 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:10:06.052702 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:10:06.052713 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:10:06.052724 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:10:06.052734 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:10:06.052745 | orchestrator | ok: [testbed-manager]
2025-08-29 17:10:06.052756 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:10:06.052766 | orchestrator |
2025-08-29 17:10:06.052777 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-08-29 17:10:06.052788 | orchestrator | Friday 29 August 2025 17:10:00 +0000 (0:00:01.472) 0:00:16.025 *********
2025-08-29 17:10:06.052799 | orchestrator | changed: [testbed-manager]
2025-08-29 17:10:06.052810 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:10:06.052827 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:10:06.052838 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:10:06.052848 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:10:06.052859 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:10:06.052869 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:10:06.052880 | orchestrator |
2025-08-29 17:10:06.052891 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-08-29 17:10:06.052902 | orchestrator | Friday 29 August 2025 17:10:02 +0000 (0:00:01.805) 0:00:17.831 *********
2025-08-29 17:10:06.052912 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:10:06.052923 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:10:06.052934 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:10:06.052944 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:10:06.052955 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:10:06.052966 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:10:06.052993 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:10:06.053005 | orchestrator |
2025-08-29 17:10:06.053016 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-08-29 17:10:06.053026 | orchestrator |
2025-08-29 17:10:06.053038 | orchestrator | TASK [Install python3-docker] **************************************************
2025-08-29 17:10:06.053049 | orchestrator | Friday 29 August 2025 17:10:03 +0000 (0:00:00.775) 0:00:18.607 *********
2025-08-29 17:10:06.053060 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:10:06.053071 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:10:06.053081 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:10:06.053092 | orchestrator | ok: [testbed-manager]
2025-08-29 17:10:06.053103 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:10:06.053113 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:10:06.053124 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:10:06.053135 | orchestrator |
2025-08-29 17:10:06.053146 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:10:06.053158 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 17:10:06.053171 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:10:06.053181 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:10:06.053192 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:10:06.053203 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:10:06.053213 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:10:06.053224 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:10:06.053235 | orchestrator |
2025-08-29 17:10:06.053246 | orchestrator |
2025-08-29 17:10:06.053256 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:10:06.053267 | orchestrator | Friday 29 August 2025 17:10:06 +0000 (0:00:02.764) 0:00:21.371 *********
2025-08-29 17:10:06.053278 | orchestrator | ===============================================================================
2025-08-29 17:10:06.053288 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.68s
2025-08-29 17:10:06.053299 | orchestrator | Install python3-docker -------------------------------------------------- 2.76s
2025-08-29 17:10:06.053310 | orchestrator | Apply netplan configuration --------------------------------------------- 2.53s
2025-08-29 17:10:06.053320 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.81s
2025-08-29 17:10:06.053338 | orchestrator | Apply netplan configuration --------------------------------------------- 1.75s
2025-08-29 17:10:06.053348 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.65s
2025-08-29 17:10:06.053359 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.65s
2025-08-29 17:10:06.053370 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.54s
2025-08-29 17:10:06.053380 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.47s
2025-08-29 17:10:06.053391 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.81s
2025-08-29 17:10:06.053402 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.79s
2025-08-29 17:10:06.053419 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.78s
2025-08-29 17:10:06.754148 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-08-29 17:10:18.818875 | orchestrator | 2025-08-29 17:10:18 | INFO  | Task 8a90776b-24f3-414c-a4fc-3ea08e479392 (reboot) was prepared for execution.
2025-08-29 17:10:18.818976 | orchestrator | 2025-08-29 17:10:18 | INFO  | It takes a moment until task 8a90776b-24f3-414c-a4fc-3ea08e479392 (reboot) has been started and output is visible here.
2025-08-29 17:10:29.369643 | orchestrator |
2025-08-29 17:10:29.369758 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 17:10:29.369776 | orchestrator |
2025-08-29 17:10:29.369807 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 17:10:29.369820 | orchestrator | Friday 29 August 2025 17:10:23 +0000 (0:00:00.225) 0:00:00.225 *********
2025-08-29 17:10:29.369832 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:10:29.369844 | orchestrator |
2025-08-29 17:10:29.369855 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 17:10:29.369867 | orchestrator | Friday 29 August 2025 17:10:23 +0000 (0:00:00.108) 0:00:00.334 *********
2025-08-29 17:10:29.369878 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:10:29.369889 | orchestrator |
2025-08-29 17:10:29.369900 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 17:10:29.369911 | orchestrator | Friday 29 August 2025 17:10:24 +0000 (0:00:00.992) 0:00:01.326 *********
2025-08-29 17:10:29.369922 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:10:29.369934 | orchestrator |
2025-08-29 17:10:29.369945 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 17:10:29.369956 | orchestrator |
2025-08-29 17:10:29.369967 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 17:10:29.369984 | orchestrator | Friday 29 August 2025 17:10:24 +0000 (0:00:00.124) 0:00:01.451 *********
2025-08-29 17:10:29.370096 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:10:29.370108 | orchestrator |
2025-08-29 17:10:29.370119 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 17:10:29.370131 | orchestrator | Friday 29 August 2025 17:10:24 +0000 (0:00:00.104) 0:00:01.555 *********
2025-08-29 17:10:29.370141 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:10:29.370152 | orchestrator |
2025-08-29 17:10:29.370163 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 17:10:29.370174 | orchestrator | Friday 29 August 2025 17:10:25 +0000 (0:00:00.652) 0:00:02.208 *********
2025-08-29 17:10:29.370185 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:10:29.370197 | orchestrator |
2025-08-29 17:10:29.370209 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 17:10:29.370222 | orchestrator |
2025-08-29 17:10:29.370233 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 17:10:29.370245 | orchestrator | Friday 29 August 2025 17:10:25 +0000 (0:00:00.140) 0:00:02.349 *********
2025-08-29 17:10:29.370257 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:10:29.370269 | orchestrator |
2025-08-29 17:10:29.370281 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 17:10:29.370315 | orchestrator | Friday 29 August 2025 17:10:25 +0000 (0:00:00.252) 0:00:02.601 *********
2025-08-29 17:10:29.370328 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:10:29.370340 | orchestrator |
2025-08-29 17:10:29.370352 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 17:10:29.370365 | orchestrator | Friday 29 August 2025 17:10:26 +0000 (0:00:00.660) 0:00:03.262 *********
2025-08-29 17:10:29.370377 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:10:29.370388 | orchestrator |
2025-08-29 17:10:29.370400 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 17:10:29.370412 | orchestrator |
2025-08-29 17:10:29.370424 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 17:10:29.370436 | orchestrator | Friday 29 August 2025 17:10:26 +0000 (0:00:00.137) 0:00:03.400 *********
2025-08-29 17:10:29.370448 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:10:29.370460 | orchestrator |
2025-08-29 17:10:29.370472 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 17:10:29.370485 | orchestrator | Friday 29 August 2025 17:10:26 +0000 (0:00:00.123) 0:00:03.524 *********
2025-08-29 17:10:29.370497 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:10:29.370509 | orchestrator |
2025-08-29 17:10:29.370521 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 17:10:29.370533 | orchestrator | Friday 29 August 2025 17:10:27 +0000 (0:00:00.658) 0:00:04.182 *********
2025-08-29 17:10:29.370545 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:10:29.370557 | orchestrator |
2025-08-29 17:10:29.370569 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 17:10:29.370579 | orchestrator |
2025-08-29 17:10:29.370590 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 17:10:29.370601 | orchestrator | Friday 29 August 2025 17:10:27 +0000 (0:00:00.144) 0:00:04.327 *********
2025-08-29 17:10:29.370611 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:10:29.370622 | orchestrator |
2025-08-29 17:10:29.370632 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 17:10:29.370643 | orchestrator | Friday 29 August 2025 17:10:27 +0000 (0:00:00.114) 0:00:04.441 *********
2025-08-29 17:10:29.370654 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:10:29.370664 | orchestrator |
2025-08-29 17:10:29.370675 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 17:10:29.370686 | orchestrator | Friday 29 August 2025 17:10:27 +0000 (0:00:00.678) 0:00:05.120 *********
2025-08-29 17:10:29.370696 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:10:29.370707 | orchestrator |
2025-08-29 17:10:29.370718 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 17:10:29.370728 | orchestrator |
2025-08-29 17:10:29.370739 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 17:10:29.370750 | orchestrator | Friday 29 August 2025 17:10:28 +0000 (0:00:00.124) 0:00:05.244 *********
2025-08-29 17:10:29.370760 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:10:29.370771 | orchestrator |
2025-08-29 17:10:29.370782 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 17:10:29.370792 | orchestrator | Friday 29 August 2025 17:10:28 +0000 (0:00:00.114) 0:00:05.358 *********
2025-08-29 17:10:29.370803 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:10:29.370814 | orchestrator |
2025-08-29 17:10:29.370824 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 17:10:29.370835 | orchestrator | Friday 29 August 2025 17:10:28 +0000 (0:00:00.700) 0:00:06.059 *********
2025-08-29 17:10:29.370863 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:10:29.370875 | orchestrator |
2025-08-29 17:10:29.370886 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:10:29.370898 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:10:29.370917 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:10:29.370929 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:10:29.370940 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:10:29.370956 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:10:29.370968 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:10:29.370978 | orchestrator |
2025-08-29 17:10:29.371010 | orchestrator |
2025-08-29 17:10:29.371021 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:10:29.371032 | orchestrator | Friday 29 August 2025 17:10:28 +0000 (0:00:00.045) 0:00:06.104 *********
2025-08-29 17:10:29.371042 | orchestrator | ===============================================================================
2025-08-29 17:10:29.371053 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.34s
2025-08-29 17:10:29.371064 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.82s
2025-08-29 17:10:29.371075 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.72s
2025-08-29 17:10:29.751805 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-08-29 17:10:42.093544 | orchestrator | 2025-08-29 17:10:42 | INFO  | Task 990a3506-a3f5-46ed-8979-a57d10b098b0 (wait-for-connection) was prepared for execution.
2025-08-29 17:10:42.093695 | orchestrator | 2025-08-29 17:10:42 | INFO  | It takes a moment until task 990a3506-a3f5-46ed-8979-a57d10b098b0 (wait-for-connection) has been started and output is visible here.
2025-08-29 17:10:58.881660 | orchestrator | 2025-08-29 17:10:58.881774 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-08-29 17:10:58.881790 | orchestrator | 2025-08-29 17:10:58.881803 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-08-29 17:10:58.881814 | orchestrator | Friday 29 August 2025 17:10:46 +0000 (0:00:00.296) 0:00:00.296 ********* 2025-08-29 17:10:58.881825 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:10:58.881837 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:10:58.881848 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:10:58.881859 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:10:58.881870 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:10:58.881880 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:10:58.881891 | orchestrator | 2025-08-29 17:10:58.881902 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:10:58.881913 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:10:58.881926 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:10:58.881937 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:10:58.881948 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:10:58.881958 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:10:58.881969 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:10:58.882005 | orchestrator | 2025-08-29 17:10:58.882089 | orchestrator | 2025-08-29 17:10:58.882104 | orchestrator | TASKS RECAP 
******************************************************************** 2025-08-29 17:10:58.882115 | orchestrator | Friday 29 August 2025 17:10:58 +0000 (0:00:11.693) 0:00:11.990 ********* 2025-08-29 17:10:58.882126 | orchestrator | =============================================================================== 2025-08-29 17:10:58.882137 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.69s 2025-08-29 17:10:59.238300 | orchestrator | + osism apply hddtemp 2025-08-29 17:11:11.465666 | orchestrator | 2025-08-29 17:11:11 | INFO  | Task cc14c0cf-8c4d-490d-a2db-f7594ed47913 (hddtemp) was prepared for execution. 2025-08-29 17:11:11.465779 | orchestrator | 2025-08-29 17:11:11 | INFO  | It takes a moment until task cc14c0cf-8c4d-490d-a2db-f7594ed47913 (hddtemp) has been started and output is visible here. 2025-08-29 17:11:39.806177 | orchestrator | 2025-08-29 17:11:39.806298 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-08-29 17:11:39.806314 | orchestrator | 2025-08-29 17:11:39.806328 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-08-29 17:11:39.806340 | orchestrator | Friday 29 August 2025 17:11:15 +0000 (0:00:00.295) 0:00:00.295 ********* 2025-08-29 17:11:39.806353 | orchestrator | ok: [testbed-manager] 2025-08-29 17:11:39.806365 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:11:39.806377 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:11:39.806388 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:11:39.806400 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:11:39.806411 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:11:39.806423 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:11:39.806434 | orchestrator | 2025-08-29 17:11:39.806446 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-08-29 17:11:39.806458 | orchestrator | Friday 29 August 2025 
17:11:16 +0000 (0:00:00.782) 0:00:01.077 ********* 2025-08-29 17:11:39.806471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:11:39.806486 | orchestrator | 2025-08-29 17:11:39.806498 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-08-29 17:11:39.806526 | orchestrator | Friday 29 August 2025 17:11:18 +0000 (0:00:01.354) 0:00:02.432 ********* 2025-08-29 17:11:39.806539 | orchestrator | ok: [testbed-manager] 2025-08-29 17:11:39.806550 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:11:39.806562 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:11:39.806573 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:11:39.806584 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:11:39.806595 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:11:39.806607 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:11:39.806618 | orchestrator | 2025-08-29 17:11:39.806630 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-08-29 17:11:39.806641 | orchestrator | Friday 29 August 2025 17:11:20 +0000 (0:00:02.084) 0:00:04.516 ********* 2025-08-29 17:11:39.806654 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:11:39.806668 | orchestrator | changed: [testbed-manager] 2025-08-29 17:11:39.806680 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:11:39.806693 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:11:39.806705 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:11:39.806718 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:11:39.806730 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:11:39.806742 | orchestrator | 2025-08-29 17:11:39.806755 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-08-29 17:11:39.806768 | orchestrator | Friday 29 August 2025 17:11:21 +0000 (0:00:01.231) 0:00:05.748 ********* 2025-08-29 17:11:39.806780 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:11:39.806793 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:11:39.806827 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:11:39.806840 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:11:39.806852 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:11:39.806864 | orchestrator | ok: [testbed-manager] 2025-08-29 17:11:39.806877 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:11:39.806889 | orchestrator | 2025-08-29 17:11:39.806902 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-08-29 17:11:39.806914 | orchestrator | Friday 29 August 2025 17:11:22 +0000 (0:00:01.191) 0:00:06.940 ********* 2025-08-29 17:11:39.806927 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:11:39.806940 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:11:39.806952 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:11:39.806965 | orchestrator | changed: [testbed-manager] 2025-08-29 17:11:39.806978 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:11:39.806991 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:11:39.807004 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:11:39.807015 | orchestrator | 2025-08-29 17:11:39.807027 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-08-29 17:11:39.807038 | orchestrator | Friday 29 August 2025 17:11:23 +0000 (0:00:00.933) 0:00:07.874 ********* 2025-08-29 17:11:39.807071 | orchestrator | changed: [testbed-manager] 2025-08-29 17:11:39.807082 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:11:39.807092 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:11:39.807103 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:11:39.807114 | orchestrator | changed: 
[testbed-node-4] 2025-08-29 17:11:39.807124 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:11:39.807134 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:11:39.807145 | orchestrator | 2025-08-29 17:11:39.807156 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-08-29 17:11:39.807180 | orchestrator | Friday 29 August 2025 17:11:35 +0000 (0:00:11.734) 0:00:19.608 ********* 2025-08-29 17:11:39.807191 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:11:39.807202 | orchestrator | 2025-08-29 17:11:39.807213 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-08-29 17:11:39.807224 | orchestrator | Friday 29 August 2025 17:11:36 +0000 (0:00:01.462) 0:00:21.070 ********* 2025-08-29 17:11:39.807235 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:11:39.807245 | orchestrator | changed: [testbed-manager] 2025-08-29 17:11:39.807256 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:11:39.807266 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:11:39.807277 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:11:39.807287 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:11:39.807298 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:11:39.807309 | orchestrator | 2025-08-29 17:11:39.807319 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:11:39.807330 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:11:39.807359 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 17:11:39.807371 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 17:11:39.807382 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 17:11:39.807393 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 17:11:39.807404 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 17:11:39.807423 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 17:11:39.807434 | orchestrator | 2025-08-29 17:11:39.807446 | orchestrator | 2025-08-29 17:11:39.807457 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:11:39.807473 | orchestrator | Friday 29 August 2025 17:11:39 +0000 (0:00:02.721) 0:00:23.792 ********* 2025-08-29 17:11:39.807485 | orchestrator | =============================================================================== 2025-08-29 17:11:39.807495 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.73s 2025-08-29 17:11:39.807506 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.72s 2025-08-29 17:11:39.807517 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.08s 2025-08-29 17:11:39.807528 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.46s 2025-08-29 17:11:39.807539 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.35s 2025-08-29 17:11:39.807550 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.23s 2025-08-29 17:11:39.807561 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.19s 2025-08-29 17:11:39.807571 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.93s 2025-08-29 17:11:39.807582 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.78s 2025-08-29 17:11:40.167177 | orchestrator | ++ semver latest 7.1.1 2025-08-29 17:11:40.221125 | orchestrator | + [[ -1 -ge 0 ]] 2025-08-29 17:11:40.221180 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-08-29 17:11:40.221193 | orchestrator | + sudo systemctl restart manager.service 2025-08-29 17:11:54.163350 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-08-29 17:11:54.163449 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-08-29 17:11:54.163463 | orchestrator | + local max_attempts=60 2025-08-29 17:11:54.163474 | orchestrator | + local name=ceph-ansible 2025-08-29 17:11:54.163484 | orchestrator | + local attempt_num=1 2025-08-29 17:11:54.163495 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 17:11:54.207765 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 17:11:54.207790 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 17:11:54.207800 | orchestrator | + sleep 5 2025-08-29 17:11:59.215251 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 17:11:59.284302 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 17:11:59.284399 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 17:11:59.284415 | orchestrator | + sleep 5 2025-08-29 17:12:04.288262 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 17:12:04.324626 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 17:12:04.324729 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 17:12:04.324746 | orchestrator | + sleep 5 2025-08-29 17:12:09.327112 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 17:12:09.368876 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 17:12:09.369030 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 17:12:09.369057 | orchestrator | + sleep 5
2025-08-29 17:12:14.373607 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 17:12:14.418639 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 17:12:14.418719 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 17:12:14.418734 | orchestrator | + sleep 5
2025-08-29 17:12:19.423673 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 17:12:19.467929 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 17:12:19.468018 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 17:12:19.468033 | orchestrator | + sleep 5
2025-08-29 17:12:24.473311 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 17:12:24.514952 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 17:12:24.515026 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 17:12:24.515069 | orchestrator | + sleep 5
2025-08-29 17:12:29.520095 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 17:12:29.571373 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-08-29 17:12:29.571447 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 17:12:29.571461 | orchestrator | + sleep 5
2025-08-29 17:12:34.574875 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 17:12:34.734907 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-08-29 17:12:34.735007 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 17:12:34.735024 | orchestrator | + sleep 5
2025-08-29 17:12:39.738667 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 17:12:39.782005 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-08-29 17:12:39.782178 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 17:12:39.782195 | orchestrator | + sleep 5
2025-08-29 17:12:44.786746 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 17:12:44.831732 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-08-29 17:12:44.831828 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 17:12:44.831842 | orchestrator | + sleep 5
2025-08-29 17:12:49.837408 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 17:12:49.868273 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-08-29 17:12:49.868347 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 17:12:49.868361 | orchestrator | + sleep 5
2025-08-29 17:12:54.874340 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 17:12:54.909405 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-08-29 17:12:54.910076 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 17:12:54.910130 | orchestrator | + sleep 5
2025-08-29 17:12:59.913815 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 17:12:59.950323 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-08-29 17:12:59.950405 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-08-29 17:12:59.950419 | orchestrator | + local max_attempts=60
2025-08-29 17:12:59.950432 | orchestrator | + local name=kolla-ansible
2025-08-29 17:12:59.950443 | orchestrator | + local attempt_num=1
2025-08-29 17:12:59.951392 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-08-29 17:12:59.993293 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-08-29 17:12:59.993363 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-08-29 17:12:59.993377 | orchestrator | + local max_attempts=60
2025-08-29 17:12:59.993390 | orchestrator | + local name=osism-ansible
2025-08-29 17:12:59.993401 | orchestrator | + local attempt_num=1
2025-08-29 17:12:59.993929 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-08-29 17:13:00.036217 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-08-29 17:13:00.036280 | orchestrator | + [[ true == \t\r\u\e ]]
2025-08-29 17:13:00.036295 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-08-29 17:13:00.237866 | orchestrator | ARA in ceph-ansible already disabled.
2025-08-29 17:13:00.421691 | orchestrator | ARA in kolla-ansible already disabled.
2025-08-29 17:13:00.582280 | orchestrator | ARA in osism-ansible already disabled.
2025-08-29 17:13:00.753464 | orchestrator | ARA in osism-kubernetes already disabled.
2025-08-29 17:13:00.755461 | orchestrator | + osism apply gather-facts
2025-08-29 17:13:12.994452 | orchestrator | 2025-08-29 17:13:12 | INFO  | Task 17a6d5a0-18df-4a81-9e91-974e432ef05c (gather-facts) was prepared for execution.
2025-08-29 17:13:12.994549 | orchestrator | 2025-08-29 17:13:12 | INFO  | It takes a moment until task 17a6d5a0-18df-4a81-9e91-974e432ef05c (gather-facts) has been started and output is visible here.
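The xtrace output above shows a helper that polls `docker inspect` every five seconds until the container reports `healthy`, giving up after a fixed number of attempts. A minimal sketch of such a `wait_for_container_healthy` function, reconstructed from the trace (the real helper ships with the testbed scripts and calls `/usr/bin/docker` by absolute path; `docker` is used here so the sketch is self-contained):

```shell
#!/usr/bin/env bash
# Poll a container's health status until it reports "healthy" or the
# attempt budget is exhausted. Reconstructed from the trace; the actual
# testbed helper may differ in detail.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}

# Example: wait_for_container_healthy 60 ceph-ansible
```

With 60 attempts at a 5-second interval this waits up to roughly five minutes per container, which matches the cadence of the trace.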
2025-08-29 17:13:27.350222 | orchestrator |
2025-08-29 17:13:27.350321 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-08-29 17:13:27.350338 | orchestrator |
2025-08-29 17:13:27.350350 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 17:13:27.350361 | orchestrator | Friday 29 August 2025 17:13:17 +0000 (0:00:00.227) 0:00:00.227 *********
2025-08-29 17:13:27.350373 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:13:27.350384 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:13:27.350395 | orchestrator | ok: [testbed-manager]
2025-08-29 17:13:27.350406 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:13:27.350447 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:13:27.350463 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:13:27.350474 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:13:27.350484 | orchestrator |
2025-08-29 17:13:27.350496 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-08-29 17:13:27.350506 | orchestrator |
2025-08-29 17:13:27.350517 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-08-29 17:13:27.350528 | orchestrator | Friday 29 August 2025 17:13:26 +0000 (0:00:09.271) 0:00:09.499 *********
2025-08-29 17:13:27.350539 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:13:27.350550 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:13:27.350561 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:13:27.350572 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:13:27.350583 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:13:27.350594 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:13:27.350605 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:13:27.350615 | orchestrator |
2025-08-29 17:13:27.350626 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:13:27.350638 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 17:13:27.350649 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 17:13:27.350660 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 17:13:27.350671 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 17:13:27.350682 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 17:13:27.350693 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 17:13:27.350704 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 17:13:27.350715 | orchestrator |
2025-08-29 17:13:27.350726 | orchestrator |
2025-08-29 17:13:27.350737 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:13:27.350748 | orchestrator | Friday 29 August 2025 17:13:26 +0000 (0:00:00.561) 0:00:10.061 *********
2025-08-29 17:13:27.350760 | orchestrator | ===============================================================================
2025-08-29 17:13:27.350773 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.27s
2025-08-29 17:13:27.350785 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s
2025-08-29 17:13:27.713328 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-08-29 17:13:27.725098 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-08-29 17:13:27.744162 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-08-29 17:13:27.759750 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-08-29 17:13:27.778000 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-08-29 17:13:27.798528 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-08-29 17:13:27.812916 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-08-29 17:13:27.828853 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-08-29 17:13:27.851291 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-08-29 17:13:27.867279 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-08-29 17:13:27.880428 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-08-29 17:13:27.892747 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-08-29 17:13:27.904966 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-08-29 17:13:27.924683 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-08-29 17:13:27.950822 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-08-29 17:13:27.966832 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-08-29 17:13:27.979352 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-08-29 17:13:27.991974 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-08-29 17:13:28.006591 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-08-29 17:13:28.023309 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-08-29 17:13:28.045569 | orchestrator | + [[ false == \t\r\u\e ]]
2025-08-29 17:13:28.375418 | orchestrator | ok: Runtime: 0:24:08.332408
2025-08-29 17:13:28.481963 |
2025-08-29 17:13:28.482100 | TASK [Deploy services]
2025-08-29 17:13:29.015743 | orchestrator | skipping: Conditional result was False
2025-08-29 17:13:29.035157 |
2025-08-29 17:13:29.035359 | TASK [Deploy in a nutshell]
2025-08-29 17:13:29.722811 | orchestrator | + set -e
2025-08-29 17:13:29.722919 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-08-29 17:13:29.723695 | orchestrator |
2025-08-29 17:13:29.723704 | orchestrator | # PULL IMAGES
2025-08-29 17:13:29.723709 | orchestrator |
2025-08-29 17:13:29.723717 | orchestrator | ++ export INTERACTIVE=false
2025-08-29 17:13:29.723723 | orchestrator | ++ INTERACTIVE=false
2025-08-29 17:13:29.723748 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-08-29 17:13:29.723757 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-08-29 17:13:29.723763 | orchestrator | + source /opt/manager-vars.sh
2025-08-29 17:13:29.723767 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-08-29 17:13:29.723774 | orchestrator | ++ NUMBER_OF_NODES=6
2025-08-29 17:13:29.723778 | orchestrator | ++ export CEPH_VERSION=reef
2025-08-29 17:13:29.723785 | orchestrator | ++ CEPH_VERSION=reef
2025-08-29 17:13:29.723789 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-08-29 17:13:29.723797 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-08-29 17:13:29.723800 | orchestrator | ++ export MANAGER_VERSION=latest
2025-08-29 17:13:29.723806 | orchestrator | ++ MANAGER_VERSION=latest
2025-08-29 17:13:29.723810 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 17:13:29.723814 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 17:13:29.723818 | orchestrator | ++ export ARA=false
2025-08-29 17:13:29.723822 | orchestrator | ++ ARA=false
2025-08-29 17:13:29.723826 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 17:13:29.723829 | orchestrator | ++ DEPLOY_MODE=manager
2025-08-29 17:13:29.723833 | orchestrator | ++ export TEMPEST=false
2025-08-29 17:13:29.723837 | orchestrator | ++ TEMPEST=false
2025-08-29 17:13:29.723841 | orchestrator | ++ export IS_ZUUL=true
2025-08-29 17:13:29.723844 | orchestrator | ++ IS_ZUUL=true
2025-08-29 17:13:29.723848 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.184
2025-08-29 17:13:29.723852 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.184
2025-08-29 17:13:29.723856 | orchestrator | ++ export EXTERNAL_API=false
2025-08-29 17:13:29.723860 | orchestrator | ++ EXTERNAL_API=false
2025-08-29 17:13:29.723863 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-08-29 17:13:29.723867 | orchestrator | ++ IMAGE_USER=ubuntu
2025-08-29 17:13:29.723871 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-08-29 17:13:29.723875 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-08-29 17:13:29.723878 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-08-29 17:13:29.723882 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-08-29 17:13:29.723886 | orchestrator | + echo
2025-08-29 17:13:29.723894 | orchestrator | + echo '# PULL IMAGES'
2025-08-29 17:13:29.723898 | orchestrator | + echo
2025-08-29 17:13:29.724265 | orchestrator | ++ semver latest 7.0.0
2025-08-29 17:13:29.777172 | orchestrator | + [[ -1 -ge 0 ]]
2025-08-29 17:13:29.777214 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-08-29 17:13:29.777219 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2025-08-29 17:13:31.740162 | orchestrator | 2025-08-29 17:13:31 | INFO  | Trying to run play pull-images in environment custom
2025-08-29 17:13:41.959808 | orchestrator | 2025-08-29 17:13:41 | INFO  | Task 50a4f170-dfb7-4150-99af-fb5f85841505 (pull-images) was prepared for execution.
2025-08-29 17:13:41.959904 | orchestrator | 2025-08-29 17:13:41 | INFO  | Task 50a4f170-dfb7-4150-99af-fb5f85841505 is running in background. No more output. Check ARA for logs.
2025-08-29 17:13:44.521648 | orchestrator | 2025-08-29 17:13:44 | INFO  | Trying to run play wipe-partitions in environment custom
2025-08-29 17:13:54.691094 | orchestrator | 2025-08-29 17:13:54 | INFO  | Task 1261a776-2a78-4f75-b5b2-ef0ef73d2e5f (wipe-partitions) was prepared for execution.
2025-08-29 17:13:54.691221 | orchestrator | 2025-08-29 17:13:54 | INFO  | It takes a moment until task 1261a776-2a78-4f75-b5b2-ef0ef73d2e5f (wipe-partitions) has been started and output is visible here.
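The trace gates image pre-pulling on the manager version: the pull runs when `semver "$MANAGER_VERSION" 7.0.0` is non-negative, or when the version is the literal `latest` (the trace shows the real comparator returning -1 for `latest`, which is why the explicit `== latest` check follows). A sketch of that gate; the `semver` function below is an illustrative stand-in built on GNU `sort -V`, not the helper the testbed actually uses:

```shell
#!/usr/bin/env bash
# Illustrative stand-in for the semver comparison seen in the trace:
# prints -1, 0 or 1 depending on how version $1 compares to version $2.
semver() {
    if [[ "$1" == "$2" ]]; then
        echo 0
        return
    fi
    local lowest
    lowest=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    if [[ "$lowest" == "$1" ]]; then echo -1; else echo 1; fi
}

MANAGER_VERSION=${MANAGER_VERSION:-latest}
# Pre-pull only for manager >= 7.0.0 or the moving "latest" tag.
if [[ "$(semver "$MANAGER_VERSION" 7.0.0)" -ge 0 || "$MANAGER_VERSION" == "latest" ]]; then
    echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```

Note that `--no-wait` lets pull-images run in the background (hence "Check ARA for logs" above) while the foreground continues with wipe-partitions.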
2025-08-29 17:14:08.775620 | orchestrator |
2025-08-29 17:14:08.775744 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-08-29 17:14:08.775762 | orchestrator |
2025-08-29 17:14:08.775774 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-08-29 17:14:08.775791 | orchestrator | Friday 29 August 2025 17:13:59 +0000 (0:00:00.135) 0:00:00.135 *********
2025-08-29 17:14:08.775803 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:14:08.775814 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:14:08.775826 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:14:08.775837 | orchestrator |
2025-08-29 17:14:08.775849 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-08-29 17:14:08.775888 | orchestrator | Friday 29 August 2025 17:13:59 +0000 (0:00:00.574) 0:00:00.709 *********
2025-08-29 17:14:08.775900 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:14:08.775910 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:14:08.775925 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:14:08.775936 | orchestrator |
2025-08-29 17:14:08.775947 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-08-29 17:14:08.775958 | orchestrator | Friday 29 August 2025 17:14:00 +0000 (0:00:00.283) 0:00:00.993 *********
2025-08-29 17:14:08.775969 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:14:08.775981 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:14:08.775991 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:14:08.776002 | orchestrator |
2025-08-29 17:14:08.776013 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-08-29 17:14:08.776023 | orchestrator | Friday 29 August 2025 17:14:01 +0000 (0:00:00.745) 0:00:01.738 *********
2025-08-29 17:14:08.776035 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:14:08.776045 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:14:08.776056 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:14:08.776066 | orchestrator |
2025-08-29 17:14:08.776077 | orchestrator | TASK [Check device availability] ***********************************************
2025-08-29 17:14:08.776088 | orchestrator | Friday 29 August 2025 17:14:01 +0000 (0:00:00.321) 0:00:02.060 *********
2025-08-29 17:14:08.776099 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-08-29 17:14:08.776114 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-08-29 17:14:08.776125 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-08-29 17:14:08.776136 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-08-29 17:14:08.776149 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-08-29 17:14:08.776161 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-08-29 17:14:08.776212 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-08-29 17:14:08.776226 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-08-29 17:14:08.776238 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-08-29 17:14:08.776250 | orchestrator |
2025-08-29 17:14:08.776262 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-08-29 17:14:08.776276 | orchestrator | Friday 29 August 2025 17:14:03 +0000 (0:00:02.190) 0:00:04.251 *********
2025-08-29 17:14:08.776289 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-08-29 17:14:08.776302 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-08-29 17:14:08.776314 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-08-29 17:14:08.776326 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-08-29 17:14:08.776338 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-08-29 17:14:08.776351 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-08-29 17:14:08.776364 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-08-29 17:14:08.776377 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-08-29 17:14:08.776389 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-08-29 17:14:08.776402 | orchestrator |
2025-08-29 17:14:08.776414 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-08-29 17:14:08.776426 | orchestrator | Friday 29 August 2025 17:14:04 +0000 (0:00:01.323) 0:00:05.574 *********
2025-08-29 17:14:08.776438 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-08-29 17:14:08.776451 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-08-29 17:14:08.776463 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-08-29 17:14:08.776475 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-08-29 17:14:08.776488 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-08-29 17:14:08.776500 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-08-29 17:14:08.776511 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-08-29 17:14:08.776531 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-08-29 17:14:08.776549 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-08-29 17:14:08.776560 | orchestrator |
2025-08-29 17:14:08.776571 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-08-29 17:14:08.776582 | orchestrator | Friday 29 August 2025 17:14:07 +0000 (0:00:02.330) 0:00:07.904 *********
2025-08-29 17:14:08.776593 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:14:08.776604 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:14:08.776615 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:14:08.776626 | orchestrator |
2025-08-29 17:14:08.776637 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-08-29 17:14:08.776648 | orchestrator | Friday 29 August 2025 17:14:07 +0000 (0:00:00.608) 0:00:08.513 *********
2025-08-29 17:14:08.776659 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:14:08.776669 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:14:08.776681 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:14:08.776692 | orchestrator |
2025-08-29 17:14:08.776703 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:14:08.776716 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:14:08.776729 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:14:08.776757 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:14:08.776769 | orchestrator |
2025-08-29 17:14:08.776780 | orchestrator |
2025-08-29 17:14:08.776791 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:14:08.776802 | orchestrator | Friday 29 August 2025 17:14:08 +0000 (0:00:00.623) 0:00:09.137 *********
2025-08-29 17:14:08.776813 | orchestrator | ===============================================================================
2025-08-29 17:14:08.776824 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.33s
2025-08-29 17:14:08.776835 | orchestrator | Check device availability ----------------------------------------------- 2.19s
2025-08-29 17:14:08.776846 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.32s
2025-08-29 17:14:08.776857 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.75s
2025-08-29 17:14:08.776868 | orchestrator | Request device events from the kernel ----------------------------------- 0.62s
2025-08-29 17:14:08.776879 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s
2025-08-29 17:14:08.776890 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.57s
2025-08-29 17:14:08.776901 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.32s
2025-08-29 17:14:08.776912 | orchestrator | Remove all rook related logical devices --------------------------------- 0.28s
2025-08-29 17:14:21.180214 | orchestrator | 2025-08-29 17:14:21 | INFO  | Task c08ed99e-4bf1-41cb-8601-527c7609f19a (facts) was prepared for execution.
2025-08-29 17:14:21.180379 | orchestrator | 2025-08-29 17:14:21 | INFO  | It takes a moment until task c08ed99e-4bf1-41cb-8601-527c7609f19a (facts) has been started and output is visible here.
2025-08-29 17:14:33.893757 | orchestrator |
2025-08-29 17:14:33.893865 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-08-29 17:14:33.893883 | orchestrator |
2025-08-29 17:14:33.893896 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-08-29 17:14:33.893909 | orchestrator | Friday 29 August 2025 17:14:25 +0000 (0:00:00.293) 0:00:00.293 *********
2025-08-29 17:14:33.893920 | orchestrator | ok: [testbed-manager]
2025-08-29 17:14:33.893932 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:14:33.893943 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:14:33.893978 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:14:33.893989 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:14:33.894000 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:14:33.894010 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:14:33.894085 | orchestrator |
2025-08-29 17:14:33.894097 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-08-29 17:14:33.894108 | orchestrator | Friday 29 August 2025 17:14:26 +0000 (0:00:01.230) 0:00:01.523 *********
2025-08-29 17:14:33.894119 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:14:33.894130 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:14:33.894141 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:14:33.894152 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:14:33.894162 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:14:33.894173 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:14:33.894183 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:14:33.894216 | orchestrator |
2025-08-29 17:14:33.894227 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-08-29 17:14:33.894238 | orchestrator |
2025-08-29 17:14:33.894264 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 17:14:33.894276 | orchestrator | Friday 29 August 2025 17:14:28 +0000 (0:00:01.324) 0:00:02.848 *********
2025-08-29 17:14:33.894287 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:14:33.894297 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:14:33.894309 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:14:33.894321 | orchestrator | ok: [testbed-manager]
2025-08-29 17:14:33.894332 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:14:33.894344 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:14:33.894355 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:14:33.894367 | orchestrator |
2025-08-29 17:14:33.894379 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-08-29 17:14:33.894391 | orchestrator |
2025-08-29 17:14:33.894403 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-08-29 17:14:33.894416 | orchestrator | Friday 29 August 2025 17:14:32 +0000 (0:00:04.600) 0:00:07.448 *********
2025-08-29 17:14:33.894427 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:14:33.894439 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:14:33.894450 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:14:33.894462 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:14:33.894473 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:14:33.894485 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:14:33.894497 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:14:33.894509 | orchestrator |
2025-08-29 17:14:33.894521 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:14:33.894533 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:14:33.894547 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:14:33.894559 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:14:33.894600 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:14:33.894612 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:14:33.894625 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:14:33.894637 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:14:33.894649 | orchestrator |
2025-08-29 17:14:33.894673 | orchestrator |
2025-08-29 17:14:33.894686 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:14:33.894699 | orchestrator | Friday 29 August 2025 17:14:33 +0000 (0:00:00.720) 0:00:08.168 *********
2025-08-29 17:14:33.894711 | orchestrator | ===============================================================================
2025-08-29 17:14:33.894722 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.60s
2025-08-29 17:14:33.894734 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.32s
2025-08-29 17:14:33.894745 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.23s
2025-08-29 17:14:33.894756 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.72s
2025-08-29 17:14:36.396464 | orchestrator | 2025-08-29 17:14:36 | INFO  | Task 225c5bd6-cc9f-4741-86cf-563e2b5b8bb0 (ceph-configure-lvm-volumes) was prepared for execution.
2025-08-29 17:14:36.396549 | orchestrator | 2025-08-29 17:14:36 | INFO  | It takes a moment until task 225c5bd6-cc9f-4741-86cf-563e2b5b8bb0 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-08-29 17:14:48.854333 | orchestrator |
2025-08-29 17:14:48.854443 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-08-29 17:14:48.854460 | orchestrator |
2025-08-29 17:14:48.854473 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 17:14:48.854485 | orchestrator | Friday 29 August 2025 17:14:40 +0000 (0:00:00.393) 0:00:00.393 *********
2025-08-29 17:14:48.854497 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 17:14:48.854508 | orchestrator |
2025-08-29 17:14:48.854519 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 17:14:48.854530 | orchestrator | Friday 29 August 2025 17:14:41 +0000 (0:00:00.236) 0:00:00.630 *********
2025-08-29 17:14:48.854541 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:14:48.854553 | orchestrator |
2025-08-29 17:14:48.854564 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:14:48.854575 | orchestrator | Friday 29 August 2025 17:14:41 +0000 (0:00:00.257) 0:00:00.888 *********
2025-08-29 17:14:48.854586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-08-29 17:14:48.854597 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-08-29 17:14:48.854608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-08-29 17:14:48.854630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-08-29 17:14:48.854642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-08-29 17:14:48.854653 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-08-29 17:14:48.854664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-08-29 17:14:48.854674 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-08-29 17:14:48.854685 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-08-29 17:14:48.854696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-08-29 17:14:48.854707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-08-29 17:14:48.854717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-08-29 17:14:48.854728 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-08-29 17:14:48.854739 | orchestrator |
2025-08-29 17:14:48.854750 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:14:48.854761 | orchestrator | Friday 29 August 2025 17:14:41 +0000 (0:00:00.364) 0:00:01.252 *********
2025-08-29 17:14:48.854772 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:14:48.854804 | orchestrator |
2025-08-29 17:14:48.854816 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:14:48.854828 | orchestrator | Friday 29 August 2025 17:14:42 +0000 (0:00:00.513) 0:00:01.765 *********
2025-08-29 17:14:48.854847 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:14:48.854865 | orchestrator |
2025-08-29 17:14:48.854883 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:14:48.854900 | orchestrator | Friday 29 August 2025 17:14:42 +0000 (0:00:00.232) 0:00:01.998 *********
2025-08-29 17:14:48.854928 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:14:48.854948 | orchestrator |
2025-08-29 17:14:48.854965 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:14:48.854983 | orchestrator | Friday 29 August 2025 17:14:42 +0000 (0:00:00.202) 0:00:02.201 *********
2025-08-29 17:14:48.855000 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:14:48.855020 | orchestrator |
2025-08-29 17:14:48.855037 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:14:48.855054 | orchestrator | Friday 29 August 2025 17:14:42 +0000 (0:00:00.202) 0:00:02.404 *********
2025-08-29 17:14:48.855072 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:14:48.855090 | orchestrator |
2025-08-29 17:14:48.855109 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:14:48.855127 | orchestrator | Friday 29 August 2025 17:14:43 +0000 (0:00:00.199) 0:00:02.603 *********
2025-08-29 17:14:48.855144 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:14:48.855160 | orchestrator |
2025-08-29 17:14:48.855176 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:14:48.855192 | orchestrator | Friday 29 August 2025 17:14:43 +0000 (0:00:00.229) 0:00:02.833 *********
2025-08-29 17:14:48.855258 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:14:48.855275 | orchestrator |
2025-08-29 17:14:48.855293 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:14:48.855310 | orchestrator | Friday 29 August 2025 17:14:43 +0000 (0:00:00.214) 0:00:03.048 *********
2025-08-29 17:14:48.855327 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:14:48.855344 | orchestrator |
2025-08-29 17:14:48.855361 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:14:48.855379 | orchestrator | Friday 29 August 2025 17:14:43 +0000 (0:00:00.223) 0:00:03.271 *********
2025-08-29 17:14:48.855398 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585)
2025-08-29 17:14:48.855417 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585)
2025-08-29 17:14:48.855436 | orchestrator |
2025-08-29 17:14:48.855455 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:14:48.855473 | orchestrator | Friday 29 August 2025 17:14:44 +0000 (0:00:00.414) 0:00:03.686 *********
2025-08-29 17:14:48.855511 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_82706033-d7aa-4ff6-a1c3-b9e917369a8a)
2025-08-29 17:14:48.855524 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_82706033-d7aa-4ff6-a1c3-b9e917369a8a)
2025-08-29 17:14:48.855535 | orchestrator |
2025-08-29 17:14:48.855545 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:14:48.855557 | orchestrator | Friday 29 August 2025 17:14:44 +0000 (0:00:00.414) 0:00:04.100 *********
2025-08-29 17:14:48.855576 |
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2fca4086-decf-4125-8890-d41999d174b7) 2025-08-29 17:14:48.855588 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2fca4086-decf-4125-8890-d41999d174b7) 2025-08-29 17:14:48.855598 | orchestrator | 2025-08-29 17:14:48.855609 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:14:48.855620 | orchestrator | Friday 29 August 2025 17:14:45 +0000 (0:00:00.649) 0:00:04.749 ********* 2025-08-29 17:14:48.855631 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_95e03a76-aedc-4db5-a6e6-17eb5f40fbcd) 2025-08-29 17:14:48.855655 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_95e03a76-aedc-4db5-a6e6-17eb5f40fbcd) 2025-08-29 17:14:48.855666 | orchestrator | 2025-08-29 17:14:48.855677 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:14:48.855687 | orchestrator | Friday 29 August 2025 17:14:45 +0000 (0:00:00.697) 0:00:05.447 ********* 2025-08-29 17:14:48.855698 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 17:14:48.855709 | orchestrator | 2025-08-29 17:14:48.855719 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:14:48.855730 | orchestrator | Friday 29 August 2025 17:14:46 +0000 (0:00:00.783) 0:00:06.231 ********* 2025-08-29 17:14:48.855741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-08-29 17:14:48.855751 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-08-29 17:14:48.855762 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-08-29 17:14:48.855773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-08-29 17:14:48.855783 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-08-29 17:14:48.855794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-08-29 17:14:48.855804 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-08-29 17:14:48.855815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-08-29 17:14:48.855825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-08-29 17:14:48.855836 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-08-29 17:14:48.855846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-08-29 17:14:48.855857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-08-29 17:14:48.855873 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-08-29 17:14:48.855891 | orchestrator | 2025-08-29 17:14:48.855909 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:14:48.855928 | orchestrator | Friday 29 August 2025 17:14:47 +0000 (0:00:00.419) 0:00:06.650 ********* 2025-08-29 17:14:48.855945 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:48.855963 | orchestrator | 2025-08-29 17:14:48.855981 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:14:48.855999 | orchestrator | Friday 29 August 2025 17:14:47 +0000 (0:00:00.202) 0:00:06.853 ********* 2025-08-29 17:14:48.856018 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:48.856037 | orchestrator | 2025-08-29 17:14:48.856055 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-08-29 17:14:48.856074 | orchestrator | Friday 29 August 2025 17:14:47 +0000 (0:00:00.206) 0:00:07.059 ********* 2025-08-29 17:14:48.856093 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:48.856111 | orchestrator | 2025-08-29 17:14:48.856130 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:14:48.856148 | orchestrator | Friday 29 August 2025 17:14:47 +0000 (0:00:00.223) 0:00:07.283 ********* 2025-08-29 17:14:48.856166 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:48.856184 | orchestrator | 2025-08-29 17:14:48.856227 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:14:48.856247 | orchestrator | Friday 29 August 2025 17:14:48 +0000 (0:00:00.231) 0:00:07.515 ********* 2025-08-29 17:14:48.856265 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:48.856285 | orchestrator | 2025-08-29 17:14:48.856307 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:14:48.856318 | orchestrator | Friday 29 August 2025 17:14:48 +0000 (0:00:00.214) 0:00:07.729 ********* 2025-08-29 17:14:48.856329 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:48.856340 | orchestrator | 2025-08-29 17:14:48.856350 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:14:48.856361 | orchestrator | Friday 29 August 2025 17:14:48 +0000 (0:00:00.198) 0:00:07.928 ********* 2025-08-29 17:14:48.856372 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:48.856382 | orchestrator | 2025-08-29 17:14:48.856393 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:14:48.856404 | orchestrator | Friday 29 August 2025 17:14:48 +0000 (0:00:00.198) 0:00:08.127 ********* 2025-08-29 17:14:48.856426 | 
orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:56.885933 | orchestrator | 2025-08-29 17:14:56.886074 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:14:56.886094 | orchestrator | Friday 29 August 2025 17:14:48 +0000 (0:00:00.200) 0:00:08.328 ********* 2025-08-29 17:14:56.886106 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-08-29 17:14:56.886119 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-08-29 17:14:56.886130 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-08-29 17:14:56.886141 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-08-29 17:14:56.886153 | orchestrator | 2025-08-29 17:14:56.886164 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:14:56.886176 | orchestrator | Friday 29 August 2025 17:14:49 +0000 (0:00:01.139) 0:00:09.467 ********* 2025-08-29 17:14:56.886248 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:56.886261 | orchestrator | 2025-08-29 17:14:56.886272 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:14:56.886284 | orchestrator | Friday 29 August 2025 17:14:50 +0000 (0:00:00.241) 0:00:09.709 ********* 2025-08-29 17:14:56.886295 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:56.886306 | orchestrator | 2025-08-29 17:14:56.886317 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:14:56.886328 | orchestrator | Friday 29 August 2025 17:14:50 +0000 (0:00:00.194) 0:00:09.903 ********* 2025-08-29 17:14:56.886339 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:56.886350 | orchestrator | 2025-08-29 17:14:56.886361 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:14:56.886372 | orchestrator | Friday 29 August 2025 17:14:50 +0000 (0:00:00.209) 0:00:10.113 
********* 2025-08-29 17:14:56.886383 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:56.886394 | orchestrator | 2025-08-29 17:14:56.886405 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-08-29 17:14:56.886416 | orchestrator | Friday 29 August 2025 17:14:50 +0000 (0:00:00.205) 0:00:10.318 ********* 2025-08-29 17:14:56.886427 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-08-29 17:14:56.886438 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-08-29 17:14:56.886449 | orchestrator | 2025-08-29 17:14:56.886462 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-08-29 17:14:56.886474 | orchestrator | Friday 29 August 2025 17:14:51 +0000 (0:00:00.180) 0:00:10.499 ********* 2025-08-29 17:14:56.886486 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:56.886498 | orchestrator | 2025-08-29 17:14:56.886511 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-08-29 17:14:56.886523 | orchestrator | Friday 29 August 2025 17:14:51 +0000 (0:00:00.132) 0:00:10.631 ********* 2025-08-29 17:14:56.886535 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:56.886547 | orchestrator | 2025-08-29 17:14:56.886559 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-08-29 17:14:56.886571 | orchestrator | Friday 29 August 2025 17:14:51 +0000 (0:00:00.140) 0:00:10.772 ********* 2025-08-29 17:14:56.886584 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:56.886620 | orchestrator | 2025-08-29 17:14:56.886632 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-08-29 17:14:56.886644 | orchestrator | Friday 29 August 2025 17:14:51 +0000 (0:00:00.140) 0:00:10.913 ********* 2025-08-29 17:14:56.886657 | orchestrator | ok: [testbed-node-3] 
2025-08-29 17:14:56.886669 | orchestrator | 2025-08-29 17:14:56.886682 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-08-29 17:14:56.886694 | orchestrator | Friday 29 August 2025 17:14:51 +0000 (0:00:00.149) 0:00:11.062 ********* 2025-08-29 17:14:56.886707 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b00dade2-f82b-53af-89a3-8c9250354ec6'}}) 2025-08-29 17:14:56.886720 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8088253a-7e26-529d-8fdb-0f472c9bb5d3'}}) 2025-08-29 17:14:56.886732 | orchestrator | 2025-08-29 17:14:56.886745 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-08-29 17:14:56.886758 | orchestrator | Friday 29 August 2025 17:14:51 +0000 (0:00:00.177) 0:00:11.240 ********* 2025-08-29 17:14:56.886771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b00dade2-f82b-53af-89a3-8c9250354ec6'}})  2025-08-29 17:14:56.886791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8088253a-7e26-529d-8fdb-0f472c9bb5d3'}})  2025-08-29 17:14:56.886804 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:56.886816 | orchestrator | 2025-08-29 17:14:56.886827 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-08-29 17:14:56.886838 | orchestrator | Friday 29 August 2025 17:14:51 +0000 (0:00:00.151) 0:00:11.391 ********* 2025-08-29 17:14:56.886849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b00dade2-f82b-53af-89a3-8c9250354ec6'}})  2025-08-29 17:14:56.886860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8088253a-7e26-529d-8fdb-0f472c9bb5d3'}})  2025-08-29 17:14:56.886871 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:56.886882 | 
orchestrator | 2025-08-29 17:14:56.886893 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-08-29 17:14:56.886903 | orchestrator | Friday 29 August 2025 17:14:52 +0000 (0:00:00.359) 0:00:11.751 ********* 2025-08-29 17:14:56.886914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b00dade2-f82b-53af-89a3-8c9250354ec6'}})  2025-08-29 17:14:56.886925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8088253a-7e26-529d-8fdb-0f472c9bb5d3'}})  2025-08-29 17:14:56.886936 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:56.886947 | orchestrator | 2025-08-29 17:14:56.886976 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 17:14:56.886987 | orchestrator | Friday 29 August 2025 17:14:52 +0000 (0:00:00.161) 0:00:11.912 ********* 2025-08-29 17:14:56.886998 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:14:56.887009 | orchestrator | 2025-08-29 17:14:56.887020 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 17:14:56.887057 | orchestrator | Friday 29 August 2025 17:14:52 +0000 (0:00:00.161) 0:00:12.073 ********* 2025-08-29 17:14:56.887069 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:14:56.887079 | orchestrator | 2025-08-29 17:14:56.887090 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 17:14:56.887101 | orchestrator | Friday 29 August 2025 17:14:52 +0000 (0:00:00.154) 0:00:12.227 ********* 2025-08-29 17:14:56.887112 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:14:56.887123 | orchestrator | 2025-08-29 17:14:56.887133 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 17:14:56.887144 | orchestrator | Friday 29 August 2025 17:14:52 +0000 (0:00:00.144) 0:00:12.372 
*********
2025-08-29 17:14:56.887155 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:14:56.887166 | orchestrator |
2025-08-29 17:14:56.887185 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-08-29 17:14:56.887196 | orchestrator | Friday 29 August 2025 17:14:53 +0000 (0:00:00.140) 0:00:12.513 *********
2025-08-29 17:14:56.887226 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:14:56.887237 | orchestrator |
2025-08-29 17:14:56.887248 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-08-29 17:14:56.887258 | orchestrator | Friday 29 August 2025 17:14:53 +0000 (0:00:00.167) 0:00:12.680 *********
2025-08-29 17:14:56.887269 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 17:14:56.887280 | orchestrator |     "ceph_osd_devices": {
2025-08-29 17:14:56.887291 | orchestrator |         "sdb": {
2025-08-29 17:14:56.887302 | orchestrator |             "osd_lvm_uuid": "b00dade2-f82b-53af-89a3-8c9250354ec6"
2025-08-29 17:14:56.887313 | orchestrator |         },
2025-08-29 17:14:56.887325 | orchestrator |         "sdc": {
2025-08-29 17:14:56.887335 | orchestrator |             "osd_lvm_uuid": "8088253a-7e26-529d-8fdb-0f472c9bb5d3"
2025-08-29 17:14:56.887346 | orchestrator |         }
2025-08-29 17:14:56.887357 | orchestrator |     }
2025-08-29 17:14:56.887368 | orchestrator | }
2025-08-29 17:14:56.887379 | orchestrator |
2025-08-29 17:14:56.887390 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-08-29 17:14:56.887401 | orchestrator | Friday 29 August 2025 17:14:53 +0000 (0:00:00.152) 0:00:12.833 *********
2025-08-29 17:14:56.887412 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:14:56.887422 | orchestrator |
2025-08-29 17:14:56.887433 | orchestrator | TASK [Print DB devices] ********************************************************
2025-08-29 17:14:56.887444 | orchestrator | Friday 29 August 2025 17:14:53 +0000 (0:00:00.131) 0:00:12.964 *********
2025-08-29 17:14:56.887460 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:14:56.887471 | orchestrator |
2025-08-29 17:14:56.887482 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-08-29 17:14:56.887493 | orchestrator | Friday 29 August 2025 17:14:53 +0000 (0:00:00.152) 0:00:13.117 *********
2025-08-29 17:14:56.887519 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:14:56.887531 | orchestrator |
2025-08-29 17:14:56.887542 | orchestrator | TASK [Print configuration data] ************************************************
2025-08-29 17:14:56.887563 | orchestrator | Friday 29 August 2025 17:14:53 +0000 (0:00:00.143) 0:00:13.260 *********
2025-08-29 17:14:56.887574 | orchestrator | changed: [testbed-node-3] => {
2025-08-29 17:14:56.887585 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-08-29 17:14:56.887596 | orchestrator |         "ceph_osd_devices": {
2025-08-29 17:14:56.887607 | orchestrator |             "sdb": {
2025-08-29 17:14:56.887618 | orchestrator |                 "osd_lvm_uuid": "b00dade2-f82b-53af-89a3-8c9250354ec6"
2025-08-29 17:14:56.887629 | orchestrator |             },
2025-08-29 17:14:56.887640 | orchestrator |             "sdc": {
2025-08-29 17:14:56.887651 | orchestrator |                 "osd_lvm_uuid": "8088253a-7e26-529d-8fdb-0f472c9bb5d3"
2025-08-29 17:14:56.887662 | orchestrator |             }
2025-08-29 17:14:56.887673 | orchestrator |         },
2025-08-29 17:14:56.887684 | orchestrator |         "lvm_volumes": [
2025-08-29 17:14:56.887695 | orchestrator |             {
2025-08-29 17:14:56.887706 | orchestrator |                 "data": "osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6",
2025-08-29 17:14:56.887717 | orchestrator |                 "data_vg": "ceph-b00dade2-f82b-53af-89a3-8c9250354ec6"
2025-08-29 17:14:56.887728 | orchestrator |             },
2025-08-29 17:14:56.887739 | orchestrator |             {
2025-08-29 17:14:56.887750 | orchestrator |                 "data": "osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3",
2025-08-29 17:14:56.887761 | orchestrator |                 "data_vg": "ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3"
2025-08-29 17:14:56.887772 | orchestrator |             }
2025-08-29 17:14:56.887782 | orchestrator |         ]
2025-08-29 17:14:56.887793 | orchestrator |     }
2025-08-29 17:14:56.887804 | orchestrator | }
2025-08-29 17:14:56.887815 | orchestrator |
2025-08-29 17:14:56.887826 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-08-29 17:14:56.887859 | orchestrator | Friday 29 August 2025 17:14:54 +0000 (0:00:00.240) 0:00:13.500 *********
2025-08-29 17:14:56.887871 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 17:14:56.887882 | orchestrator |
2025-08-29 17:14:56.887892 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-08-29 17:14:56.887903 | orchestrator |
2025-08-29 17:14:56.887914 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 17:14:56.887925 | orchestrator | Friday 29 August 2025 17:14:56 +0000 (0:00:02.324) 0:00:15.825 *********
2025-08-29 17:14:56.887936 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-08-29 17:14:56.887947 | orchestrator |
2025-08-29 17:14:56.887958 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 17:14:56.887969 | orchestrator | Friday 29 August 2025 17:14:56 +0000 (0:00:00.297) 0:00:16.123 *********
2025-08-29 17:14:56.887993 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:14:56.888004 | orchestrator |
2025-08-29 17:14:56.888015 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:14:56.888034 | orchestrator | Friday 29 August 2025 17:14:56 +0000 (0:00:00.239) 0:00:16.363 *********
2025-08-29 17:15:05.259425 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-08-29 17:15:05.259536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for
testbed-node-4 => (item=loop1) 2025-08-29 17:15:05.259553 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-08-29 17:15:05.259565 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-08-29 17:15:05.259576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-08-29 17:15:05.259587 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-08-29 17:15:05.259598 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-08-29 17:15:05.259608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-08-29 17:15:05.259619 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-08-29 17:15:05.259631 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-08-29 17:15:05.259661 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-08-29 17:15:05.259674 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-08-29 17:15:05.259685 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-08-29 17:15:05.259700 | orchestrator | 2025-08-29 17:15:05.259714 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:15:05.259726 | orchestrator | Friday 29 August 2025 17:14:57 +0000 (0:00:00.443) 0:00:16.806 ********* 2025-08-29 17:15:05.259738 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:15:05.259750 | orchestrator | 2025-08-29 17:15:05.259761 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:15:05.259772 | orchestrator | Friday 29 August 2025 
17:14:57 +0000 (0:00:00.211) 0:00:17.018 ********* 2025-08-29 17:15:05.259783 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:15:05.259794 | orchestrator | 2025-08-29 17:15:05.259805 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:15:05.259816 | orchestrator | Friday 29 August 2025 17:14:57 +0000 (0:00:00.206) 0:00:17.224 ********* 2025-08-29 17:15:05.259827 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:15:05.259838 | orchestrator | 2025-08-29 17:15:05.259849 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:15:05.259859 | orchestrator | Friday 29 August 2025 17:14:57 +0000 (0:00:00.195) 0:00:17.420 ********* 2025-08-29 17:15:05.259870 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:15:05.259904 | orchestrator | 2025-08-29 17:15:05.259916 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:15:05.259928 | orchestrator | Friday 29 August 2025 17:14:58 +0000 (0:00:00.260) 0:00:17.681 ********* 2025-08-29 17:15:05.259938 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:15:05.259949 | orchestrator | 2025-08-29 17:15:05.259962 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:15:05.259974 | orchestrator | Friday 29 August 2025 17:14:58 +0000 (0:00:00.744) 0:00:18.426 ********* 2025-08-29 17:15:05.259987 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:15:05.259999 | orchestrator | 2025-08-29 17:15:05.260012 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:15:05.260024 | orchestrator | Friday 29 August 2025 17:14:59 +0000 (0:00:00.253) 0:00:18.679 ********* 2025-08-29 17:15:05.260036 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:15:05.260048 | orchestrator | 2025-08-29 17:15:05.260059 | orchestrator | TASK 
[Add known links to the list of available block devices] ****************** 2025-08-29 17:15:05.260071 | orchestrator | Friday 29 August 2025 17:14:59 +0000 (0:00:00.222) 0:00:18.902 ********* 2025-08-29 17:15:05.260083 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:15:05.260095 | orchestrator | 2025-08-29 17:15:05.260108 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:15:05.260120 | orchestrator | Friday 29 August 2025 17:14:59 +0000 (0:00:00.213) 0:00:19.115 ********* 2025-08-29 17:15:05.260132 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3) 2025-08-29 17:15:05.260145 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3) 2025-08-29 17:15:05.260157 | orchestrator | 2025-08-29 17:15:05.260169 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:15:05.260182 | orchestrator | Friday 29 August 2025 17:15:00 +0000 (0:00:00.415) 0:00:19.531 ********* 2025-08-29 17:15:05.260194 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fde2b913-b6a3-4677-89e6-f2f4f6c968dc) 2025-08-29 17:15:05.260230 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fde2b913-b6a3-4677-89e6-f2f4f6c968dc) 2025-08-29 17:15:05.260243 | orchestrator | 2025-08-29 17:15:05.260255 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:15:05.260267 | orchestrator | Friday 29 August 2025 17:15:00 +0000 (0:00:00.472) 0:00:20.003 ********* 2025-08-29 17:15:05.260279 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d9d2a167-7162-4dbb-9ae3-4acc9c24be60) 2025-08-29 17:15:05.260292 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d9d2a167-7162-4dbb-9ae3-4acc9c24be60) 2025-08-29 17:15:05.260305 | orchestrator | 2025-08-29 
17:15:05.260317 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:15:05.260329 | orchestrator | Friday 29 August 2025 17:15:00 +0000 (0:00:00.436) 0:00:20.440 ********* 2025-08-29 17:15:05.260359 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_822f3ffb-f416-492b-baa5-9c709b0e03df) 2025-08-29 17:15:05.260371 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_822f3ffb-f416-492b-baa5-9c709b0e03df) 2025-08-29 17:15:05.260382 | orchestrator | 2025-08-29 17:15:05.260393 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:15:05.260404 | orchestrator | Friday 29 August 2025 17:15:01 +0000 (0:00:00.447) 0:00:20.888 ********* 2025-08-29 17:15:05.260415 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 17:15:05.260426 | orchestrator | 2025-08-29 17:15:05.260437 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:15:05.260455 | orchestrator | Friday 29 August 2025 17:15:01 +0000 (0:00:00.336) 0:00:21.225 ********* 2025-08-29 17:15:05.260466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-08-29 17:15:05.260487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-08-29 17:15:05.260498 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-08-29 17:15:05.260508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-08-29 17:15:05.260519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-08-29 17:15:05.260530 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-08-29 17:15:05.260540 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-08-29 17:15:05.260551 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-08-29 17:15:05.260562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-08-29 17:15:05.260574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-08-29 17:15:05.260584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-08-29 17:15:05.260595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-08-29 17:15:05.260605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-08-29 17:15:05.260616 | orchestrator | 2025-08-29 17:15:05.260627 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:15:05.260638 | orchestrator | Friday 29 August 2025 17:15:02 +0000 (0:00:00.422) 0:00:21.647 ********* 2025-08-29 17:15:05.260649 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:15:05.260660 | orchestrator | 2025-08-29 17:15:05.260671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:15:05.260682 | orchestrator | Friday 29 August 2025 17:15:02 +0000 (0:00:00.208) 0:00:21.855 ********* 2025-08-29 17:15:05.260693 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:15:05.260704 | orchestrator | 2025-08-29 17:15:05.260715 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:15:05.260726 | orchestrator | Friday 29 August 2025 17:15:03 +0000 (0:00:00.700) 0:00:22.556 ********* 2025-08-29 17:15:05.260737 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:15:05.260747 | orchestrator | 
2025-08-29 17:15:05.260758 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:05.260769 | orchestrator | Friday 29 August 2025 17:15:03 +0000 (0:00:00.240) 0:00:22.797 *********
2025-08-29 17:15:05.260780 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:05.260791 | orchestrator |
2025-08-29 17:15:05.260802 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:05.260813 | orchestrator | Friday 29 August 2025 17:15:03 +0000 (0:00:00.213) 0:00:23.010 *********
2025-08-29 17:15:05.260823 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:05.260834 | orchestrator |
2025-08-29 17:15:05.260845 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:05.260856 | orchestrator | Friday 29 August 2025 17:15:03 +0000 (0:00:00.194) 0:00:23.204 *********
2025-08-29 17:15:05.260866 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:05.260877 | orchestrator |
2025-08-29 17:15:05.260888 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:05.260899 | orchestrator | Friday 29 August 2025 17:15:03 +0000 (0:00:00.206) 0:00:23.411 *********
2025-08-29 17:15:05.260909 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:05.260920 | orchestrator |
2025-08-29 17:15:05.260931 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:05.260942 | orchestrator | Friday 29 August 2025 17:15:04 +0000 (0:00:00.216) 0:00:23.627 *********
2025-08-29 17:15:05.260953 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:05.260965 | orchestrator |
2025-08-29 17:15:05.260976 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:05.260993 | orchestrator | Friday 29 August 2025 17:15:04 +0000 (0:00:00.195) 0:00:23.823 *********
2025-08-29 17:15:05.261004 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-08-29 17:15:05.261016 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-08-29 17:15:05.261027 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-08-29 17:15:05.261038 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-08-29 17:15:05.261049 | orchestrator |
2025-08-29 17:15:05.261059 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:05.261070 | orchestrator | Friday 29 August 2025 17:15:05 +0000 (0:00:00.699) 0:00:24.523 *********
2025-08-29 17:15:05.261081 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:05.261092 | orchestrator |
2025-08-29 17:15:05.261110 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:12.910107 | orchestrator | Friday 29 August 2025 17:15:05 +0000 (0:00:00.215) 0:00:24.739 *********
2025-08-29 17:15:12.910278 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:12.910298 | orchestrator |
2025-08-29 17:15:12.910312 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:12.910324 | orchestrator | Friday 29 August 2025 17:15:05 +0000 (0:00:00.223) 0:00:24.963 *********
2025-08-29 17:15:12.910335 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:12.910347 | orchestrator |
2025-08-29 17:15:12.910358 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:12.910369 | orchestrator | Friday 29 August 2025 17:15:05 +0000 (0:00:00.190) 0:00:25.153 *********
2025-08-29 17:15:12.910380 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:12.910391 | orchestrator |
2025-08-29 17:15:12.910423 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-08-29 17:15:12.910435 | orchestrator | Friday 29 August 2025 17:15:05 +0000 (0:00:00.208) 0:00:25.361 *********
2025-08-29 17:15:12.910462 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-08-29 17:15:12.910473 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-08-29 17:15:12.910484 | orchestrator |
2025-08-29 17:15:12.910495 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-08-29 17:15:12.910506 | orchestrator | Friday 29 August 2025 17:15:06 +0000 (0:00:00.395) 0:00:25.757 *********
2025-08-29 17:15:12.910517 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:12.910528 | orchestrator |
2025-08-29 17:15:12.910539 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-08-29 17:15:12.910550 | orchestrator | Friday 29 August 2025 17:15:06 +0000 (0:00:00.155) 0:00:25.912 *********
2025-08-29 17:15:12.910561 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:12.910572 | orchestrator |
2025-08-29 17:15:12.910584 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-08-29 17:15:12.910597 | orchestrator | Friday 29 August 2025 17:15:06 +0000 (0:00:00.132) 0:00:26.045 *********
2025-08-29 17:15:12.910610 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:12.910622 | orchestrator |
2025-08-29 17:15:12.910634 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-08-29 17:15:12.910646 | orchestrator | Friday 29 August 2025 17:15:06 +0000 (0:00:00.132) 0:00:26.177 *********
2025-08-29 17:15:12.910658 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:15:12.910672 | orchestrator |
2025-08-29 17:15:12.910684 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-08-29 17:15:12.910696 | orchestrator | Friday 29 August 2025 17:15:06 +0000 (0:00:00.140) 0:00:26.318 *********
2025-08-29 17:15:12.910709 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7cc16d54-75e9-5c21-b21a-878ce6efb3d6'}})
2025-08-29 17:15:12.910721 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '53dd44b5-7849-5101-9e2a-fd90ac927c8f'}})
2025-08-29 17:15:12.910733 | orchestrator |
2025-08-29 17:15:12.910746 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-08-29 17:15:12.910783 | orchestrator | Friday 29 August 2025 17:15:07 +0000 (0:00:00.180) 0:00:26.498 *********
2025-08-29 17:15:12.910797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7cc16d54-75e9-5c21-b21a-878ce6efb3d6'}})
2025-08-29 17:15:12.910811 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '53dd44b5-7849-5101-9e2a-fd90ac927c8f'}})
2025-08-29 17:15:12.910824 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:12.910836 | orchestrator |
2025-08-29 17:15:12.910848 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-08-29 17:15:12.910861 | orchestrator | Friday 29 August 2025 17:15:07 +0000 (0:00:00.167) 0:00:26.665 *********
2025-08-29 17:15:12.910873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7cc16d54-75e9-5c21-b21a-878ce6efb3d6'}})
2025-08-29 17:15:12.910885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '53dd44b5-7849-5101-9e2a-fd90ac927c8f'}})
2025-08-29 17:15:12.910898 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:12.910910 | orchestrator |
2025-08-29 17:15:12.910922 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-08-29 17:15:12.910934 | orchestrator | Friday 29 August 2025 17:15:07 +0000 (0:00:00.155) 0:00:26.821 *********
2025-08-29 17:15:12.910946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7cc16d54-75e9-5c21-b21a-878ce6efb3d6'}})
2025-08-29 17:15:12.910958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '53dd44b5-7849-5101-9e2a-fd90ac927c8f'}})
2025-08-29 17:15:12.910969 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:12.910979 | orchestrator |
2025-08-29 17:15:12.910990 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-08-29 17:15:12.911001 | orchestrator | Friday 29 August 2025 17:15:07 +0000 (0:00:00.154) 0:00:26.976 *********
2025-08-29 17:15:12.911012 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:15:12.911023 | orchestrator |
2025-08-29 17:15:12.911034 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-08-29 17:15:12.911044 | orchestrator | Friday 29 August 2025 17:15:07 +0000 (0:00:00.135) 0:00:27.111 *********
2025-08-29 17:15:12.911055 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:15:12.911066 | orchestrator |
2025-08-29 17:15:12.911076 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-08-29 17:15:12.911087 | orchestrator | Friday 29 August 2025 17:15:07 +0000 (0:00:00.142) 0:00:27.253 *********
2025-08-29 17:15:12.911098 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:12.911109 | orchestrator |
2025-08-29 17:15:12.911137 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-08-29 17:15:12.911148 | orchestrator | Friday 29 August 2025 17:15:07 +0000 (0:00:00.158) 0:00:27.411 *********
2025-08-29 17:15:12.911159 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:12.911170 | orchestrator |
2025-08-29 17:15:12.911181 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-08-29 17:15:12.911192 | orchestrator | Friday 29 August 2025 17:15:08 +0000 (0:00:00.380) 0:00:27.792 *********
2025-08-29 17:15:12.911202 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:12.911231 | orchestrator |
2025-08-29 17:15:12.911242 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-08-29 17:15:12.911253 | orchestrator | Friday 29 August 2025 17:15:08 +0000 (0:00:00.143) 0:00:27.936 *********
2025-08-29 17:15:12.911264 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 17:15:12.911275 | orchestrator |  "ceph_osd_devices": {
2025-08-29 17:15:12.911286 | orchestrator |  "sdb": {
2025-08-29 17:15:12.911297 | orchestrator |  "osd_lvm_uuid": "7cc16d54-75e9-5c21-b21a-878ce6efb3d6"
2025-08-29 17:15:12.911308 | orchestrator |  },
2025-08-29 17:15:12.911319 | orchestrator |  "sdc": {
2025-08-29 17:15:12.911339 | orchestrator |  "osd_lvm_uuid": "53dd44b5-7849-5101-9e2a-fd90ac927c8f"
2025-08-29 17:15:12.911350 | orchestrator |  }
2025-08-29 17:15:12.911361 | orchestrator |  }
2025-08-29 17:15:12.911372 | orchestrator | }
2025-08-29 17:15:12.911384 | orchestrator |
2025-08-29 17:15:12.911395 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-08-29 17:15:12.911406 | orchestrator | Friday 29 August 2025 17:15:08 +0000 (0:00:00.144) 0:00:28.081 *********
2025-08-29 17:15:12.911416 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:12.911427 | orchestrator |
2025-08-29 17:15:12.911444 | orchestrator | TASK [Print DB devices] ********************************************************
2025-08-29 17:15:12.911456 | orchestrator | Friday 29 August 2025 17:15:08 +0000 (0:00:00.141) 0:00:28.222 *********
2025-08-29 17:15:12.911466 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:12.911477 | orchestrator |
2025-08-29 17:15:12.911488 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-08-29 17:15:12.911498 | orchestrator | Friday 29 August 2025 17:15:08 +0000 (0:00:00.172) 0:00:28.395 *********
2025-08-29 17:15:12.911509 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:15:12.911520 | orchestrator |
2025-08-29 17:15:12.911530 | orchestrator | TASK [Print configuration data] ************************************************
2025-08-29 17:15:12.911541 | orchestrator | Friday 29 August 2025 17:15:09 +0000 (0:00:00.225) 0:00:28.621 *********
2025-08-29 17:15:12.911552 | orchestrator | changed: [testbed-node-4] => {
2025-08-29 17:15:12.911563 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-08-29 17:15:12.911574 | orchestrator |  "ceph_osd_devices": {
2025-08-29 17:15:12.911584 | orchestrator |  "sdb": {
2025-08-29 17:15:12.911595 | orchestrator |  "osd_lvm_uuid": "7cc16d54-75e9-5c21-b21a-878ce6efb3d6"
2025-08-29 17:15:12.911611 | orchestrator |  },
2025-08-29 17:15:12.911622 | orchestrator |  "sdc": {
2025-08-29 17:15:12.911633 | orchestrator |  "osd_lvm_uuid": "53dd44b5-7849-5101-9e2a-fd90ac927c8f"
2025-08-29 17:15:12.911643 | orchestrator |  }
2025-08-29 17:15:12.911654 | orchestrator |  },
2025-08-29 17:15:12.911665 | orchestrator |  "lvm_volumes": [
2025-08-29 17:15:12.911676 | orchestrator |  {
2025-08-29 17:15:12.911687 | orchestrator |  "data": "osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6",
2025-08-29 17:15:12.911697 | orchestrator |  "data_vg": "ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6"
2025-08-29 17:15:12.911708 | orchestrator |  },
2025-08-29 17:15:12.911719 | orchestrator |  {
2025-08-29 17:15:12.911730 | orchestrator |  "data": "osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f",
2025-08-29 17:15:12.911741 | orchestrator |  "data_vg": "ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f"
2025-08-29 17:15:12.911752 | orchestrator |  }
2025-08-29 17:15:12.911763 | orchestrator |  ]
2025-08-29 17:15:12.911774 | orchestrator |  }
2025-08-29 17:15:12.911785 | orchestrator | }
2025-08-29 17:15:12.911795 | orchestrator |
2025-08-29 17:15:12.911806 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-08-29 17:15:12.911817 | orchestrator | Friday 29 August 2025 17:15:09 +0000 (0:00:00.261) 0:00:28.883 *********
2025-08-29 17:15:12.911828 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-08-29 17:15:12.911838 | orchestrator |
2025-08-29 17:15:12.911849 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-08-29 17:15:12.911860 | orchestrator |
2025-08-29 17:15:12.911871 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 17:15:12.911882 | orchestrator | Friday 29 August 2025 17:15:10 +0000 (0:00:01.210) 0:00:30.094 *********
2025-08-29 17:15:12.911892 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-08-29 17:15:12.911903 | orchestrator |
2025-08-29 17:15:12.911914 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 17:15:12.911924 | orchestrator | Friday 29 August 2025 17:15:11 +0000 (0:00:00.745) 0:00:30.840 *********
2025-08-29 17:15:12.911941 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:15:12.911952 | orchestrator |
2025-08-29 17:15:12.911963 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:15:12.911973 | orchestrator | Friday 29 August 2025 17:15:12 +0000 (0:00:00.880) 0:00:31.720 *********
2025-08-29 17:15:12.911984 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-08-29 17:15:12.911995 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-08-29 17:15:12.912006 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-08-29 17:15:12.912016 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-08-29 17:15:12.912027 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-08-29 17:15:12.912038 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-08-29 17:15:12.912054 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-08-29 17:15:23.437453 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-08-29 17:15:23.437582 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-08-29 17:15:23.437598 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-08-29 17:15:23.438393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-08-29 17:15:23.438413 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-08-29 17:15:23.438425 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-08-29 17:15:23.438435 | orchestrator |
2025-08-29 17:15:23.438446 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:15:23.438457 | orchestrator | Friday 29 August 2025 17:15:12 +0000 (0:00:00.663) 0:00:32.384 *********
2025-08-29 17:15:23.438466 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.438477 | orchestrator |
2025-08-29 17:15:23.438487 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:15:23.438496 | orchestrator | Friday 29 August 2025 17:15:13 +0000 (0:00:00.267) 0:00:32.651 *********
2025-08-29 17:15:23.438506 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.438515 | orchestrator |
2025-08-29 17:15:23.438525 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:15:23.438534 | orchestrator | Friday 29 August 2025 17:15:13 +0000 (0:00:00.250) 0:00:32.901 *********
2025-08-29 17:15:23.438544 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.438553 | orchestrator |
2025-08-29 17:15:23.438563 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:15:23.438572 | orchestrator | Friday 29 August 2025 17:15:13 +0000 (0:00:00.213) 0:00:33.114 *********
2025-08-29 17:15:23.438582 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.438591 | orchestrator |
2025-08-29 17:15:23.438601 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:15:23.438610 | orchestrator | Friday 29 August 2025 17:15:13 +0000 (0:00:00.312) 0:00:33.427 *********
2025-08-29 17:15:23.438620 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.438629 | orchestrator |
2025-08-29 17:15:23.438639 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:15:23.438649 | orchestrator | Friday 29 August 2025 17:15:14 +0000 (0:00:00.286) 0:00:33.714 *********
2025-08-29 17:15:23.438658 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.438667 | orchestrator |
2025-08-29 17:15:23.438677 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:15:23.438686 | orchestrator | Friday 29 August 2025 17:15:14 +0000 (0:00:00.265) 0:00:33.979 *********
2025-08-29 17:15:23.438696 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.438727 | orchestrator |
2025-08-29 17:15:23.438738 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:15:23.438747 | orchestrator | Friday 29 August 2025 17:15:14 +0000 (0:00:00.223) 0:00:34.203 *********
2025-08-29 17:15:23.438757 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.438766 | orchestrator |
2025-08-29 17:15:23.438792 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:15:23.438803 | orchestrator | Friday 29 August 2025 17:15:14 +0000 (0:00:00.250) 0:00:34.453 *********
2025-08-29 17:15:23.438813 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92)
2025-08-29 17:15:23.438824 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92)
2025-08-29 17:15:23.438833 | orchestrator |
2025-08-29 17:15:23.438843 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:15:23.438853 | orchestrator | Friday 29 August 2025 17:15:15 +0000 (0:00:00.710) 0:00:35.163 *********
2025-08-29 17:15:23.438862 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4c7a21bb-a31e-489d-9d8e-9d9677eceddb)
2025-08-29 17:15:23.438872 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4c7a21bb-a31e-489d-9d8e-9d9677eceddb)
2025-08-29 17:15:23.438881 | orchestrator |
2025-08-29 17:15:23.438891 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:15:23.438900 | orchestrator | Friday 29 August 2025 17:15:16 +0000 (0:00:01.102) 0:00:36.265 *********
2025-08-29 17:15:23.438910 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a2504ec8-b4e2-4484-b80f-e6a3be658c85)
2025-08-29 17:15:23.438919 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a2504ec8-b4e2-4484-b80f-e6a3be658c85)
2025-08-29 17:15:23.438929 | orchestrator |
2025-08-29 17:15:23.438938 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:15:23.438948 | orchestrator | Friday 29 August 2025 17:15:17 +0000 (0:00:00.567) 0:00:36.833 *********
2025-08-29 17:15:23.438957 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f77776fc-d93a-4a3c-99b5-8bf2997ddc70)
2025-08-29 17:15:23.438967 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f77776fc-d93a-4a3c-99b5-8bf2997ddc70)
2025-08-29 17:15:23.438976 | orchestrator |
2025-08-29 17:15:23.438986 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:15:23.438995 | orchestrator | Friday 29 August 2025 17:15:17 +0000 (0:00:00.619) 0:00:37.452 *********
2025-08-29 17:15:23.439004 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-08-29 17:15:23.439014 | orchestrator |
2025-08-29 17:15:23.439023 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:23.439032 | orchestrator | Friday 29 August 2025 17:15:18 +0000 (0:00:00.398) 0:00:37.851 *********
2025-08-29 17:15:23.439061 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-08-29 17:15:23.439071 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-08-29 17:15:23.439080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-08-29 17:15:23.439090 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-08-29 17:15:23.439099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-08-29 17:15:23.439108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-08-29 17:15:23.439117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-08-29 17:15:23.439127 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-08-29 17:15:23.439137 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-08-29 17:15:23.439154 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-08-29 17:15:23.439164 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-08-29 17:15:23.439173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-08-29 17:15:23.439182 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-08-29 17:15:23.439192 | orchestrator |
2025-08-29 17:15:23.439201 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:23.439211 | orchestrator | Friday 29 August 2025 17:15:18 +0000 (0:00:00.589) 0:00:38.440 *********
2025-08-29 17:15:23.439241 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.439251 | orchestrator |
2025-08-29 17:15:23.439260 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:23.439269 | orchestrator | Friday 29 August 2025 17:15:19 +0000 (0:00:00.256) 0:00:38.697 *********
2025-08-29 17:15:23.439279 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.439288 | orchestrator |
2025-08-29 17:15:23.439299 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:23.439308 | orchestrator | Friday 29 August 2025 17:15:19 +0000 (0:00:00.235) 0:00:38.932 *********
2025-08-29 17:15:23.439318 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.439327 | orchestrator |
2025-08-29 17:15:23.439337 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:23.439346 | orchestrator | Friday 29 August 2025 17:15:19 +0000 (0:00:00.365) 0:00:39.298 *********
2025-08-29 17:15:23.439356 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.439365 | orchestrator |
2025-08-29 17:15:23.439375 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:23.439384 | orchestrator | Friday 29 August 2025 17:15:20 +0000 (0:00:00.259) 0:00:39.557 *********
2025-08-29 17:15:23.439394 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.439403 | orchestrator |
2025-08-29 17:15:23.439413 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:23.439422 | orchestrator | Friday 29 August 2025 17:15:20 +0000 (0:00:00.222) 0:00:39.780 *********
2025-08-29 17:15:23.439432 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.439441 | orchestrator |
2025-08-29 17:15:23.439450 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:23.439460 | orchestrator | Friday 29 August 2025 17:15:21 +0000 (0:00:00.852) 0:00:40.633 *********
2025-08-29 17:15:23.439470 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.439479 | orchestrator |
2025-08-29 17:15:23.439489 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:23.439498 | orchestrator | Friday 29 August 2025 17:15:21 +0000 (0:00:00.300) 0:00:40.933 *********
2025-08-29 17:15:23.439508 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.439517 | orchestrator |
2025-08-29 17:15:23.439526 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:23.439536 | orchestrator | Friday 29 August 2025 17:15:21 +0000 (0:00:00.332) 0:00:41.267 *********
2025-08-29 17:15:23.439545 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-08-29 17:15:23.439555 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-08-29 17:15:23.439565 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-08-29 17:15:23.439574 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-08-29 17:15:23.439584 | orchestrator |
2025-08-29 17:15:23.439593 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:23.439603 | orchestrator | Friday 29 August 2025 17:15:22 +0000 (0:00:00.793) 0:00:42.060 *********
2025-08-29 17:15:23.439612 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.439622 | orchestrator |
2025-08-29 17:15:23.439632 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:23.439647 | orchestrator | Friday 29 August 2025 17:15:22 +0000 (0:00:00.224) 0:00:42.284 *********
2025-08-29 17:15:23.439656 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.439666 | orchestrator |
2025-08-29 17:15:23.439675 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:23.439685 | orchestrator | Friday 29 August 2025 17:15:23 +0000 (0:00:00.225) 0:00:42.510 *********
2025-08-29 17:15:23.439695 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.439704 | orchestrator |
2025-08-29 17:15:23.439713 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:15:23.439723 | orchestrator | Friday 29 August 2025 17:15:23 +0000 (0:00:00.198) 0:00:42.709 *********
2025-08-29 17:15:23.439737 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:23.439747 | orchestrator |
2025-08-29 17:15:23.439757 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-08-29 17:15:23.439771 | orchestrator | Friday 29 August 2025 17:15:23 +0000 (0:00:00.200) 0:00:42.909 *********
2025-08-29 17:15:28.666988 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-08-29 17:15:28.667089 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-08-29 17:15:28.667104 | orchestrator |
2025-08-29 17:15:28.667117 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-08-29 17:15:28.667129 | orchestrator | Friday 29 August 2025 17:15:23 +0000 (0:00:00.210) 0:00:43.120 *********
2025-08-29 17:15:28.667140 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:28.667152 | orchestrator |
2025-08-29 17:15:28.667163 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-08-29 17:15:28.667175 | orchestrator | Friday 29 August 2025 17:15:23 +0000 (0:00:00.155) 0:00:43.275 *********
2025-08-29 17:15:28.667186 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:28.667197 | orchestrator |
2025-08-29 17:15:28.667208 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-08-29 17:15:28.667255 | orchestrator | Friday 29 August 2025 17:15:23 +0000 (0:00:00.166) 0:00:43.442 *********
2025-08-29 17:15:28.667267 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:28.667278 | orchestrator |
2025-08-29 17:15:28.667289 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-08-29 17:15:28.667301 | orchestrator | Friday 29 August 2025 17:15:24 +0000 (0:00:00.146) 0:00:43.588 *********
2025-08-29 17:15:28.667311 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:15:28.667323 | orchestrator |
2025-08-29 17:15:28.667334 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-08-29 17:15:28.667346 | orchestrator | Friday 29 August 2025 17:15:24 +0000 (0:00:00.376) 0:00:43.964 *********
2025-08-29 17:15:28.667358 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a4c19265-6381-5c6d-bd77-cfabc91aafa2'}})
2025-08-29 17:15:28.667369 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'}})
2025-08-29 17:15:28.667380 | orchestrator |
2025-08-29 17:15:28.667392 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-08-29 17:15:28.667403 | orchestrator | Friday 29 August 2025 17:15:24 +0000 (0:00:00.177) 0:00:44.142 *********
2025-08-29 17:15:28.667414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a4c19265-6381-5c6d-bd77-cfabc91aafa2'}})
2025-08-29 17:15:28.667427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'}})
2025-08-29 17:15:28.667439 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:28.667450 | orchestrator |
2025-08-29 17:15:28.667478 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-08-29 17:15:28.667490 | orchestrator | Friday 29 August 2025 17:15:24 +0000 (0:00:00.160) 0:00:44.303 *********
2025-08-29 17:15:28.667501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a4c19265-6381-5c6d-bd77-cfabc91aafa2'}})
2025-08-29 17:15:28.667535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'}})
2025-08-29 17:15:28.667549 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:28.667561 | orchestrator |
2025-08-29 17:15:28.667573 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-08-29 17:15:28.667585 | orchestrator | Friday 29 August 2025 17:15:24 +0000 (0:00:00.163) 0:00:44.466 *********
2025-08-29 17:15:28.667598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a4c19265-6381-5c6d-bd77-cfabc91aafa2'}})
2025-08-29 17:15:28.667610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'}})
2025-08-29 17:15:28.667622 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:28.667634 | orchestrator |
2025-08-29 17:15:28.667646 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-08-29 17:15:28.667659 | orchestrator | Friday 29 August 2025 17:15:25 +0000 (0:00:00.182) 0:00:44.648 *********
2025-08-29 17:15:28.667671 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:15:28.667683 | orchestrator |
2025-08-29 17:15:28.667696 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-08-29 17:15:28.667708 | orchestrator | Friday 29 August 2025 17:15:25 +0000 (0:00:00.178) 0:00:44.827 *********
2025-08-29 17:15:28.667720 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:15:28.667732 | orchestrator |
2025-08-29 17:15:28.667744 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-08-29 17:15:28.667757 | orchestrator | Friday 29 August 2025 17:15:25 +0000 (0:00:00.164) 0:00:44.991 *********
2025-08-29 17:15:28.667769 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:28.667781 | orchestrator |
2025-08-29 17:15:28.667793 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-08-29 17:15:28.667805 | orchestrator | Friday 29 August 2025 17:15:25 +0000 (0:00:00.165) 0:00:45.156 *********
2025-08-29 17:15:28.667818 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:28.667830 | orchestrator |
2025-08-29 17:15:28.667842 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-08-29 17:15:28.667855 | orchestrator | Friday 29 August 2025 17:15:25 +0000 (0:00:00.163) 0:00:45.319 *********
2025-08-29 17:15:28.667867 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:28.667880 | orchestrator |
2025-08-29 17:15:28.667891 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-08-29 17:15:28.667902 | orchestrator | Friday 29 August 2025 17:15:26 +0000 (0:00:00.180) 0:00:45.499 *********
2025-08-29 17:15:28.667913 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 17:15:28.667923 | orchestrator |  "ceph_osd_devices": {
2025-08-29 17:15:28.667934 | orchestrator |  "sdb": {
2025-08-29 17:15:28.667946 | orchestrator |  "osd_lvm_uuid": "a4c19265-6381-5c6d-bd77-cfabc91aafa2"
2025-08-29 17:15:28.667975 | orchestrator |  },
2025-08-29 17:15:28.667986 | orchestrator |  "sdc": {
2025-08-29 17:15:28.667997 | orchestrator |  "osd_lvm_uuid": "b12c38cd-5c6b-5ee1-93c6-dbb5afb60591"
2025-08-29 17:15:28.668008 | orchestrator |  }
2025-08-29 17:15:28.668019 | orchestrator |  }
2025-08-29 17:15:28.668030 | orchestrator | }
2025-08-29 17:15:28.668042 | orchestrator |
2025-08-29 17:15:28.668053 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-08-29 17:15:28.668064 | orchestrator | Friday 29 August 2025 17:15:26 +0000 (0:00:00.183) 0:00:45.682 *********
2025-08-29 17:15:28.668075 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:28.668086 | orchestrator |
2025-08-29 17:15:28.668097 | orchestrator | TASK [Print DB devices] ********************************************************
2025-08-29 17:15:28.668107 | orchestrator | Friday 29 August 2025 17:15:26 +0000 (0:00:00.177) 0:00:45.860 *********
2025-08-29 17:15:28.668118 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:28.668129 | orchestrator |
2025-08-29 17:15:28.668140 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-08-29 17:15:28.668158 | orchestrator | Friday 29 August 2025 17:15:26 +0000 (0:00:00.599) 0:00:46.459 *********
2025-08-29 17:15:28.668169 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:15:28.668179 | orchestrator |
2025-08-29 17:15:28.668190 | orchestrator | TASK [Print configuration data] ************************************************
2025-08-29 17:15:28.668201 | orchestrator | Friday 29 August 2025 17:15:27 +0000 (0:00:00.179) 0:00:46.639 *********
2025-08-29 17:15:28.668212 | orchestrator | changed: [testbed-node-5] => {
2025-08-29 17:15:28.668280 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-08-29 17:15:28.668292 | orchestrator |  "ceph_osd_devices": {
2025-08-29 17:15:28.668303 | orchestrator |  "sdb": {
2025-08-29 17:15:28.668314 | orchestrator |  "osd_lvm_uuid": "a4c19265-6381-5c6d-bd77-cfabc91aafa2"
2025-08-29 17:15:28.668325 | orchestrator |  },
2025-08-29 17:15:28.668336 | orchestrator |  "sdc": {
2025-08-29 17:15:28.668347 | orchestrator |  "osd_lvm_uuid": "b12c38cd-5c6b-5ee1-93c6-dbb5afb60591"
2025-08-29 17:15:28.668358 | orchestrator |  }
2025-08-29 17:15:28.668369 | orchestrator |  },
2025-08-29 17:15:28.668380 | orchestrator |  "lvm_volumes": [
2025-08-29 17:15:28.668391 | orchestrator |  {
2025-08-29 17:15:28.668402 | orchestrator |  "data": "osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2",
2025-08-29 17:15:28.668413 | orchestrator |  "data_vg": "ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2"
2025-08-29 17:15:28.668423 | orchestrator |  },
2025-08-29 17:15:28.668434 | orchestrator |  {
2025-08-29 17:15:28.668445 | orchestrator |  "data": "osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591",
2025-08-29 17:15:28.668456 | orchestrator |  "data_vg": "ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591"
2025-08-29 17:15:28.668467 | orchestrator |  }
2025-08-29 17:15:28.668478 | orchestrator |  ]
2025-08-29 17:15:28.668489 | orchestrator |  }
2025-08-29 17:15:28.668503 | orchestrator | }
2025-08-29 17:15:28.668515 | orchestrator |
2025-08-29 17:15:28.668526 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-08-29 17:15:28.668537 | orchestrator | Friday 29 August 2025 17:15:27 +0000 (0:00:00.408) 0:00:47.048 *********
2025-08-29 17:15:28.668547 | orchestrator | changed:
[testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-08-29 17:15:28.668558 | orchestrator | 2025-08-29 17:15:28.668569 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:15:28.668589 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 17:15:28.668603 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 17:15:28.668614 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 17:15:28.668625 | orchestrator | 2025-08-29 17:15:28.668636 | orchestrator | 2025-08-29 17:15:28.668647 | orchestrator | 2025-08-29 17:15:28.668658 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:15:28.668669 | orchestrator | Friday 29 August 2025 17:15:28 +0000 (0:00:01.065) 0:00:48.113 ********* 2025-08-29 17:15:28.668680 | orchestrator | =============================================================================== 2025-08-29 17:15:28.668690 | orchestrator | Write configuration file ------------------------------------------------ 4.60s 2025-08-29 17:15:28.668701 | orchestrator | Add known links to the list of available block devices ------------------ 1.47s 2025-08-29 17:15:28.668711 | orchestrator | Add known partitions to the list of available block devices ------------- 1.43s 2025-08-29 17:15:28.668722 | orchestrator | Get initial list of available block devices ----------------------------- 1.38s 2025-08-29 17:15:28.668733 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.28s 2025-08-29 17:15:28.668751 | orchestrator | Add known partitions to the list of available block devices ------------- 1.14s 2025-08-29 17:15:28.668762 | orchestrator | Add known links to the list of available block devices ------------------ 1.10s 2025-08-29 
17:15:28.668773 | orchestrator | Print DB devices -------------------------------------------------------- 0.92s 2025-08-29 17:15:28.668783 | orchestrator | Print configuration data ------------------------------------------------ 0.91s 2025-08-29 17:15:28.668794 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s 2025-08-29 17:15:28.668805 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s 2025-08-29 17:15:28.668815 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.79s 2025-08-29 17:15:28.668826 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s 2025-08-29 17:15:28.668837 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2025-08-29 17:15:28.668855 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2025-08-29 17:15:29.084698 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2025-08-29 17:15:29.084765 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2025-08-29 17:15:29.084771 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2025-08-29 17:15:29.084776 | orchestrator | Set WAL devices config data --------------------------------------------- 0.68s 2025-08-29 17:15:29.084780 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.68s 2025-08-29 17:15:51.950058 | orchestrator | 2025-08-29 17:15:51 | INFO  | Task 178bdf64-9edd-497d-b481-6e284a38e36a (sync inventory) is running in background. Output coming soon. 
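The "Print configuration data" task above shows how each `ceph_osd_devices` entry is turned into an `lvm_volumes` item: the per-device `osd_lvm_uuid` is expanded into an `osd-block-<uuid>` LV name and a `ceph-<uuid>` VG name. A minimal sketch of that mapping, inferred purely from the log output (the helper name is hypothetical, not the OSISM implementation):

```python
# Hypothetical helper reproducing the uuid -> lvm_volumes mapping visible in the log.
def build_lvm_volumes(ceph_osd_devices):
    """Derive lvm_volumes entries from per-device OSD LVM UUIDs."""
    return [
        {
            "data": f"osd-block-{props['osd_lvm_uuid']}",   # logical volume name
            "data_vg": f"ceph-{props['osd_lvm_uuid']}",     # volume group name
        }
        for props in ceph_osd_devices.values()
    ]

devices = {
    "sdb": {"osd_lvm_uuid": "a4c19265-6381-5c6d-bd77-cfabc91aafa2"},
    "sdc": {"osd_lvm_uuid": "b12c38cd-5c6b-5ee1-93c6-dbb5afb60591"},
}
print(build_lvm_volumes(devices))
```

Feeding in the two devices from the log reproduces the two `lvm_volumes` entries printed by the task.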
2025-08-29 17:16:18.858156 | orchestrator | 2025-08-29 17:15:53 | INFO  | Starting group_vars file reorganization
2025-08-29 17:16:18.858311 | orchestrator | 2025-08-29 17:15:53 | INFO  | Moved 0 file(s) to their respective directories
2025-08-29 17:16:18.858328 | orchestrator | 2025-08-29 17:15:53 | INFO  | Group_vars file reorganization completed
2025-08-29 17:16:18.858339 | orchestrator | 2025-08-29 17:15:56 | INFO  | Starting variable preparation from inventory
2025-08-29 17:16:18.858350 | orchestrator | 2025-08-29 17:16:00 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-08-29 17:16:18.858360 | orchestrator | 2025-08-29 17:16:00 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-08-29 17:16:18.858370 | orchestrator | 2025-08-29 17:16:00 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-08-29 17:16:18.858379 | orchestrator | 2025-08-29 17:16:00 | INFO  | 3 file(s) written, 6 host(s) processed
2025-08-29 17:16:18.858389 | orchestrator | 2025-08-29 17:16:00 | INFO  | Variable preparation completed
2025-08-29 17:16:18.858399 | orchestrator | 2025-08-29 17:16:01 | INFO  | Starting inventory overwrite handling
2025-08-29 17:16:18.858409 | orchestrator | 2025-08-29 17:16:01 | INFO  | Handling group overwrites in 99-overwrite
2025-08-29 17:16:18.858419 | orchestrator | 2025-08-29 17:16:01 | INFO  | Removing group frr:children from 60-generic
2025-08-29 17:16:18.858429 | orchestrator | 2025-08-29 17:16:01 | INFO  | Removing group storage:children from 50-kolla
2025-08-29 17:16:18.858439 | orchestrator | 2025-08-29 17:16:01 | INFO  | Removing group netbird:children from 50-infrastruture
2025-08-29 17:16:18.858448 | orchestrator | 2025-08-29 17:16:01 | INFO  | Removing group ceph-mds from 50-ceph
2025-08-29 17:16:18.858459 | orchestrator | 2025-08-29 17:16:01 | INFO  | Removing group ceph-rgw from 50-ceph
2025-08-29 17:16:18.858468 | orchestrator | 2025-08-29 17:16:01 | INFO  | Handling group overwrites in 20-roles
2025-08-29 17:16:18.858478 | orchestrator | 2025-08-29 17:16:01 | INFO  | Removing group k3s_node from 50-infrastruture
2025-08-29 17:16:18.858514 | orchestrator | 2025-08-29 17:16:01 | INFO  | Removed 6 group(s) in total
2025-08-29 17:16:18.858524 | orchestrator | 2025-08-29 17:16:01 | INFO  | Inventory overwrite handling completed
2025-08-29 17:16:18.858534 | orchestrator | 2025-08-29 17:16:02 | INFO  | Starting merge of inventory files
2025-08-29 17:16:18.858544 | orchestrator | 2025-08-29 17:16:02 | INFO  | Inventory files merged successfully
2025-08-29 17:16:18.858553 | orchestrator | 2025-08-29 17:16:07 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-08-29 17:16:18.858563 | orchestrator | 2025-08-29 17:16:17 | INFO  | Successfully wrote ClusterShell configuration
2025-08-29 17:16:18.858573 | orchestrator | [master 2737f29] 2025-08-29-17-16
2025-08-29 17:16:18.858584 | orchestrator |  1 file changed, 30 insertions(+), 9 deletions(-)
2025-08-29 17:16:20.797493 | orchestrator | 2025-08-29 17:16:20 | INFO  | Task a9d19126-9866-4011-a328-3b226814f822 (ceph-create-lvm-devices) was prepared for execution.
2025-08-29 17:16:20.797581 | orchestrator | 2025-08-29 17:16:20 | INFO  | It takes a moment until task a9d19126-9866-4011-a328-3b226814f822 (ceph-create-lvm-devices) has been started and output is visible here.
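The inventory overwrite handling logged above removes a group definition from a lower-priority inventory file whenever a higher-priority file (e.g. `99-overwrite`) redefines it, so the merged inventory has a single authoritative definition per group. A rough sketch of that idea under simplified assumptions (the data layout and function name are hypothetical, not the OSISM code):

```python
# Hypothetical sketch: drop groups redefined by an overwrite file from all
# other inventory files. `inventories` maps file name -> set of group names.
def remove_overwritten_groups(inventories, overwrite_file):
    """Return a list of (group, file) pairs that were removed."""
    removed = []
    for group in inventories.get(overwrite_file, set()):
        for name, groups in inventories.items():
            if name != overwrite_file and group in groups:
                groups.discard(group)          # mutate in place, like the log shows
                removed.append((group, name))
    return removed
```

With `{"99-overwrite": {"frr:children"}, "60-generic": {"frr:children", ...}}` this would log a removal of `frr:children` from `60-generic`, matching the messages above.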
2025-08-29 17:16:33.820029 | orchestrator | 
2025-08-29 17:16:33.820131 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-08-29 17:16:33.820142 | orchestrator | 
2025-08-29 17:16:33.820150 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 17:16:33.820158 | orchestrator | Friday 29 August 2025 17:16:25 +0000 (0:00:00.339) 0:00:00.339 *********
2025-08-29 17:16:33.820165 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 17:16:33.820172 | orchestrator | 
2025-08-29 17:16:33.820179 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 17:16:33.820186 | orchestrator | Friday 29 August 2025 17:16:25 +0000 (0:00:00.294) 0:00:00.633 *********
2025-08-29 17:16:33.820193 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:16:33.820201 | orchestrator | 
2025-08-29 17:16:33.820208 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:16:33.820214 | orchestrator | Friday 29 August 2025 17:16:25 +0000 (0:00:00.288) 0:00:00.922 *********
2025-08-29 17:16:33.820221 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-08-29 17:16:33.820229 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-08-29 17:16:33.820236 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-08-29 17:16:33.820243 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-08-29 17:16:33.820249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-08-29 17:16:33.820306 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-08-29 17:16:33.820314 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-08-29 17:16:33.820320 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-08-29 17:16:33.820327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-08-29 17:16:33.820334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-08-29 17:16:33.820340 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-08-29 17:16:33.820347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-08-29 17:16:33.820353 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-08-29 17:16:33.820360 | orchestrator | 
2025-08-29 17:16:33.820366 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:16:33.820393 | orchestrator | Friday 29 August 2025 17:16:26 +0000 (0:00:00.531) 0:00:01.454 *********
2025-08-29 17:16:33.820400 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:33.820407 | orchestrator | 
2025-08-29 17:16:33.820414 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:16:33.820435 | orchestrator | Friday 29 August 2025 17:16:26 +0000 (0:00:00.591) 0:00:02.046 *********
2025-08-29 17:16:33.820442 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:33.820449 | orchestrator | 
2025-08-29 17:16:33.820455 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:16:33.820462 | orchestrator | Friday 29 August 2025 17:16:27 +0000 (0:00:00.231) 0:00:02.277 *********
2025-08-29 17:16:33.820472 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:33.820478 | orchestrator | 
2025-08-29 17:16:33.820485 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:16:33.820491 | orchestrator | Friday 29 August 2025 17:16:27 +0000 (0:00:00.224) 0:00:02.502 *********
2025-08-29 17:16:33.820498 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:33.820504 | orchestrator | 
2025-08-29 17:16:33.820511 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:16:33.820517 | orchestrator | Friday 29 August 2025 17:16:27 +0000 (0:00:00.253) 0:00:02.755 *********
2025-08-29 17:16:33.820524 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:33.820530 | orchestrator | 
2025-08-29 17:16:33.820537 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:16:33.820544 | orchestrator | Friday 29 August 2025 17:16:27 +0000 (0:00:00.240) 0:00:02.996 *********
2025-08-29 17:16:33.820550 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:33.820556 | orchestrator | 
2025-08-29 17:16:33.820563 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:16:33.820569 | orchestrator | Friday 29 August 2025 17:16:28 +0000 (0:00:00.207) 0:00:03.204 *********
2025-08-29 17:16:33.820576 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:33.820582 | orchestrator | 
2025-08-29 17:16:33.820590 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:16:33.820597 | orchestrator | Friday 29 August 2025 17:16:28 +0000 (0:00:00.238) 0:00:03.443 *********
2025-08-29 17:16:33.820604 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:33.820612 | orchestrator | 
2025-08-29 17:16:33.820619 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:16:33.820626 | orchestrator | Friday 29 August 2025 17:16:28 +0000 (0:00:00.221) 0:00:03.664 *********
2025-08-29 17:16:33.820633 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585)
2025-08-29 17:16:33.820642 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585)
2025-08-29 17:16:33.820649 | orchestrator | 
2025-08-29 17:16:33.820657 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:16:33.820664 | orchestrator | Friday 29 August 2025 17:16:28 +0000 (0:00:00.445) 0:00:04.110 *********
2025-08-29 17:16:33.820685 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_82706033-d7aa-4ff6-a1c3-b9e917369a8a)
2025-08-29 17:16:33.820693 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_82706033-d7aa-4ff6-a1c3-b9e917369a8a)
2025-08-29 17:16:33.820701 | orchestrator | 
2025-08-29 17:16:33.820708 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:16:33.820715 | orchestrator | Friday 29 August 2025 17:16:29 +0000 (0:00:00.530) 0:00:04.640 *********
2025-08-29 17:16:33.820723 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2fca4086-decf-4125-8890-d41999d174b7)
2025-08-29 17:16:33.820730 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2fca4086-decf-4125-8890-d41999d174b7)
2025-08-29 17:16:33.820738 | orchestrator | 
2025-08-29 17:16:33.820745 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:16:33.820759 | orchestrator | Friday 29 August 2025 17:16:30 +0000 (0:00:00.658) 0:00:05.299 *********
2025-08-29 17:16:33.820767 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_95e03a76-aedc-4db5-a6e6-17eb5f40fbcd)
2025-08-29 17:16:33.820774 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_95e03a76-aedc-4db5-a6e6-17eb5f40fbcd)
2025-08-29 17:16:33.820782 | orchestrator | 
2025-08-29 17:16:33.820789 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:16:33.820797 | orchestrator | Friday 29 August 2025 17:16:31 +0000 (0:00:00.916) 0:00:06.215 *********
2025-08-29 17:16:33.820804 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-08-29 17:16:33.820811 | orchestrator | 
2025-08-29 17:16:33.820819 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:16:33.820826 | orchestrator | Friday 29 August 2025 17:16:31 +0000 (0:00:00.381) 0:00:06.596 *********
2025-08-29 17:16:33.820833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-08-29 17:16:33.820840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-08-29 17:16:33.820848 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-08-29 17:16:33.820855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-08-29 17:16:33.820862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-08-29 17:16:33.820869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-08-29 17:16:33.820877 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-08-29 17:16:33.820884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-08-29 17:16:33.820891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-08-29 17:16:33.820898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-08-29 17:16:33.820906 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-08-29 17:16:33.820913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-08-29 17:16:33.820923 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-08-29 17:16:33.820935 | orchestrator | 
2025-08-29 17:16:33.820946 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:16:33.820958 | orchestrator | Friday 29 August 2025 17:16:31 +0000 (0:00:00.493) 0:00:07.090 *********
2025-08-29 17:16:33.820970 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:33.820981 | orchestrator | 
2025-08-29 17:16:33.820991 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:16:33.821002 | orchestrator | Friday 29 August 2025 17:16:32 +0000 (0:00:00.239) 0:00:07.329 *********
2025-08-29 17:16:33.821013 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:33.821024 | orchestrator | 
2025-08-29 17:16:33.821034 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:16:33.821046 | orchestrator | Friday 29 August 2025 17:16:32 +0000 (0:00:00.216) 0:00:07.546 *********
2025-08-29 17:16:33.821057 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:33.821069 | orchestrator | 
2025-08-29 17:16:33.821080 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:16:33.821090 | orchestrator | Friday 29 August 2025 17:16:32 +0000 (0:00:00.226) 0:00:07.772 *********
2025-08-29 17:16:33.821096 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:33.821103 | orchestrator | 
2025-08-29 17:16:33.821110 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:16:33.821123 | orchestrator | Friday 29 August 2025 17:16:32 +0000 (0:00:00.226) 0:00:07.999 *********
2025-08-29 17:16:33.821129 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:33.821136 | orchestrator | 
2025-08-29 17:16:33.821143 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:16:33.821149 | orchestrator | Friday 29 August 2025 17:16:33 +0000 (0:00:00.269) 0:00:08.268 *********
2025-08-29 17:16:33.821156 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:33.821162 | orchestrator | 
2025-08-29 17:16:33.821169 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:16:33.821175 | orchestrator | Friday 29 August 2025 17:16:33 +0000 (0:00:00.225) 0:00:08.494 *********
2025-08-29 17:16:33.821182 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:33.821189 | orchestrator | 
2025-08-29 17:16:33.821195 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:16:33.821202 | orchestrator | Friday 29 August 2025 17:16:33 +0000 (0:00:00.254) 0:00:08.749 *********
2025-08-29 17:16:33.821214 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.012700 | orchestrator | 
2025-08-29 17:16:42.012815 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:16:42.012833 | orchestrator | Friday 29 August 2025 17:16:33 +0000 (0:00:00.251) 0:00:09.000 *********
2025-08-29 17:16:42.012846 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-08-29 17:16:42.012858 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-08-29 17:16:42.012870 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-08-29 17:16:42.012880 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-08-29 17:16:42.012891 | orchestrator | 
2025-08-29 17:16:42.012902 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:16:42.012914 | orchestrator | Friday 29 August 2025 17:16:35 +0000 (0:00:01.263) 0:00:10.264 *********
2025-08-29 17:16:42.012925 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.012935 | orchestrator | 
2025-08-29 17:16:42.012951 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:16:42.012971 | orchestrator | Friday 29 August 2025 17:16:35 +0000 (0:00:00.203) 0:00:10.468 *********
2025-08-29 17:16:42.012991 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.013010 | orchestrator | 
2025-08-29 17:16:42.013029 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:16:42.013049 | orchestrator | Friday 29 August 2025 17:16:35 +0000 (0:00:00.257) 0:00:10.725 *********
2025-08-29 17:16:42.013064 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.013075 | orchestrator | 
2025-08-29 17:16:42.013086 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:16:42.013098 | orchestrator | Friday 29 August 2025 17:16:35 +0000 (0:00:00.215) 0:00:10.940 *********
2025-08-29 17:16:42.013109 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.013120 | orchestrator | 
2025-08-29 17:16:42.013131 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-08-29 17:16:42.013142 | orchestrator | Friday 29 August 2025 17:16:35 +0000 (0:00:00.221) 0:00:11.161 *********
2025-08-29 17:16:42.013153 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.013164 | orchestrator | 
2025-08-29 17:16:42.013174 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-08-29 17:16:42.013186 | orchestrator | Friday 29 August 2025 17:16:36 +0000 (0:00:00.149) 0:00:11.311 *********
2025-08-29 17:16:42.013198 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b00dade2-f82b-53af-89a3-8c9250354ec6'}})
2025-08-29 17:16:42.013209 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8088253a-7e26-529d-8fdb-0f472c9bb5d3'}})
2025-08-29 17:16:42.013220 | orchestrator | 
2025-08-29 17:16:42.013231 | orchestrator | TASK [Create block VGs] ********************************************************
2025-08-29 17:16:42.013245 | orchestrator | Friday 29 August 2025 17:16:36 +0000 (0:00:00.201) 0:00:11.513 *********
2025-08-29 17:16:42.013287 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'})
2025-08-29 17:16:42.013326 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'})
2025-08-29 17:16:42.013338 | orchestrator | 
2025-08-29 17:16:42.013369 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-08-29 17:16:42.013387 | orchestrator | Friday 29 August 2025 17:16:38 +0000 (0:00:02.046) 0:00:13.559 *********
2025-08-29 17:16:42.013400 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'}) 
2025-08-29 17:16:42.013413 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'}) 
2025-08-29 17:16:42.013426 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.013438 | orchestrator | 
2025-08-29 17:16:42.013451 | orchestrator | TASK [Create block LVs] ********************************************************
2025-08-29 17:16:42.013463 | orchestrator | Friday 29 August 2025 17:16:38 +0000 (0:00:00.153) 0:00:13.713 *********
2025-08-29 17:16:42.013476 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'})
2025-08-29 17:16:42.013489 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'})
2025-08-29 17:16:42.013501 | orchestrator | 
2025-08-29 17:16:42.013513 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-08-29 17:16:42.013526 | orchestrator | Friday 29 August 2025 17:16:39 +0000 (0:00:01.426) 0:00:15.139 *********
2025-08-29 17:16:42.013538 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'}) 
2025-08-29 17:16:42.013551 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'}) 
2025-08-29 17:16:42.013564 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.013576 | orchestrator | 
2025-08-29 17:16:42.013589 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-08-29 17:16:42.013600 | orchestrator | Friday 29 August 2025 17:16:40 +0000 (0:00:00.149) 0:00:15.289 *********
2025-08-29 17:16:42.013611 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.013622 | orchestrator | 
2025-08-29 17:16:42.013633 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-08-29 17:16:42.013662 | orchestrator | Friday 29 August 2025 17:16:40 +0000 (0:00:00.141) 0:00:15.431 *********
2025-08-29 17:16:42.013674 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'}) 
2025-08-29 17:16:42.013685 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'}) 
2025-08-29 17:16:42.013696 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.013706 | orchestrator | 
2025-08-29 17:16:42.013717 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-08-29 17:16:42.013728 | orchestrator | Friday 29 August 2025 17:16:40 +0000 (0:00:00.364) 0:00:15.795 *********
2025-08-29 17:16:42.013738 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.013749 | orchestrator | 
2025-08-29 17:16:42.013759 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-08-29 17:16:42.013770 | orchestrator | Friday 29 August 2025 17:16:40 +0000 (0:00:00.141) 0:00:15.936 *********
2025-08-29 17:16:42.013781 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'}) 
2025-08-29 17:16:42.013800 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'}) 
2025-08-29 17:16:42.013811 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.013821 | orchestrator | 
2025-08-29 17:16:42.013832 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-08-29 17:16:42.013843 | orchestrator | Friday 29 August 2025 17:16:40 +0000 (0:00:00.171) 0:00:16.108 *********
2025-08-29 17:16:42.013854 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.013864 | orchestrator | 
2025-08-29 17:16:42.013875 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-08-29 17:16:42.013886 | orchestrator | Friday 29 August 2025 17:16:41 +0000 (0:00:00.134) 0:00:16.242 *********
2025-08-29 17:16:42.013896 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'}) 
2025-08-29 17:16:42.013907 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'}) 
2025-08-29 17:16:42.013918 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.013929 | orchestrator | 
2025-08-29 17:16:42.013939 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-08-29 17:16:42.013960 | orchestrator | Friday 29 August 2025 17:16:41 +0000 (0:00:00.122) 0:00:16.364 *********
2025-08-29 17:16:42.013978 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:16:42.013996 | orchestrator | 
2025-08-29 17:16:42.014086 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-08-29 17:16:42.014103 | orchestrator | Friday 29 August 2025 17:16:41 +0000 (0:00:00.125) 0:00:16.490 *********
2025-08-29 17:16:42.014121 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'}) 
2025-08-29 17:16:42.014132 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'}) 
2025-08-29 17:16:42.014143 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.014154 | orchestrator | 
2025-08-29 17:16:42.014165 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-08-29 17:16:42.014176 | orchestrator | Friday 29 August 2025 17:16:41 +0000 (0:00:00.138) 0:00:16.628 *********
2025-08-29 17:16:42.014187 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'}) 
2025-08-29 17:16:42.014198 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'}) 
2025-08-29 17:16:42.014209 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.014219 | orchestrator | 
2025-08-29 17:16:42.014230 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-08-29 17:16:42.014241 | orchestrator | Friday 29 August 2025 17:16:41 +0000 (0:00:00.150) 0:00:16.779 *********
2025-08-29 17:16:42.014252 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'}) 
2025-08-29 17:16:42.014313 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'}) 
2025-08-29 17:16:42.014327 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.014338 | orchestrator | 
2025-08-29 17:16:42.014349 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-08-29 17:16:42.014360 | orchestrator | Friday 29 August 2025 17:16:41 +0000 (0:00:00.147) 0:00:16.927 *********
2025-08-29 17:16:42.014370 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.014390 | orchestrator | 
2025-08-29 17:16:42.014401 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-08-29 17:16:42.014413 | orchestrator | Friday 29 August 2025 17:16:41 +0000 (0:00:00.138) 0:00:17.066 *********
2025-08-29 17:16:42.014424 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:16:42.014435 | orchestrator | 
2025-08-29 17:16:42.014454 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-08-29 17:16:48.739372 | orchestrator | Friday 29 August 2025 17:16:42 +0000 (0:00:00.133)
0:00:17.199 ********* 2025-08-29 17:16:48.739489 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.739507 | orchestrator | 2025-08-29 17:16:48.739520 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-08-29 17:16:48.739532 | orchestrator | Friday 29 August 2025 17:16:42 +0000 (0:00:00.120) 0:00:17.320 ********* 2025-08-29 17:16:48.739544 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 17:16:48.739556 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-08-29 17:16:48.739567 | orchestrator | } 2025-08-29 17:16:48.739579 | orchestrator | 2025-08-29 17:16:48.739590 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-08-29 17:16:48.739601 | orchestrator | Friday 29 August 2025 17:16:42 +0000 (0:00:00.274) 0:00:17.594 ********* 2025-08-29 17:16:48.739612 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 17:16:48.739623 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-08-29 17:16:48.739634 | orchestrator | } 2025-08-29 17:16:48.739645 | orchestrator | 2025-08-29 17:16:48.739656 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-08-29 17:16:48.739667 | orchestrator | Friday 29 August 2025 17:16:42 +0000 (0:00:00.121) 0:00:17.716 ********* 2025-08-29 17:16:48.739678 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 17:16:48.739689 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-08-29 17:16:48.739700 | orchestrator | } 2025-08-29 17:16:48.739712 | orchestrator | 2025-08-29 17:16:48.739723 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-08-29 17:16:48.739734 | orchestrator | Friday 29 August 2025 17:16:42 +0000 (0:00:00.178) 0:00:17.895 ********* 2025-08-29 17:16:48.739745 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:16:48.739756 | orchestrator | 2025-08-29 17:16:48.739767 | orchestrator | TASK [Gather WAL VGs 
with total and available size in bytes] ******************* 2025-08-29 17:16:48.739778 | orchestrator | Friday 29 August 2025 17:16:43 +0000 (0:00:00.645) 0:00:18.540 ********* 2025-08-29 17:16:48.739789 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:16:48.739800 | orchestrator | 2025-08-29 17:16:48.739811 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-08-29 17:16:48.739822 | orchestrator | Friday 29 August 2025 17:16:43 +0000 (0:00:00.498) 0:00:19.038 ********* 2025-08-29 17:16:48.739834 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:16:48.739846 | orchestrator | 2025-08-29 17:16:48.739858 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-08-29 17:16:48.739870 | orchestrator | Friday 29 August 2025 17:16:44 +0000 (0:00:00.499) 0:00:19.538 ********* 2025-08-29 17:16:48.739883 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:16:48.739895 | orchestrator | 2025-08-29 17:16:48.739907 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-08-29 17:16:48.739919 | orchestrator | Friday 29 August 2025 17:16:44 +0000 (0:00:00.133) 0:00:19.671 ********* 2025-08-29 17:16:48.739931 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.739943 | orchestrator | 2025-08-29 17:16:48.739956 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-08-29 17:16:48.739968 | orchestrator | Friday 29 August 2025 17:16:44 +0000 (0:00:00.112) 0:00:19.783 ********* 2025-08-29 17:16:48.739981 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.740000 | orchestrator | 2025-08-29 17:16:48.740019 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-08-29 17:16:48.740039 | orchestrator | Friday 29 August 2025 17:16:44 +0000 (0:00:00.090) 0:00:19.874 ********* 2025-08-29 17:16:48.740089 | orchestrator | ok: 
[testbed-node-3] => { 2025-08-29 17:16:48.740110 | orchestrator |  "vgs_report": { 2025-08-29 17:16:48.740131 | orchestrator |  "vg": [] 2025-08-29 17:16:48.740146 | orchestrator |  } 2025-08-29 17:16:48.740158 | orchestrator | } 2025-08-29 17:16:48.740170 | orchestrator | 2025-08-29 17:16:48.740182 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-08-29 17:16:48.740194 | orchestrator | Friday 29 August 2025 17:16:44 +0000 (0:00:00.162) 0:00:20.036 ********* 2025-08-29 17:16:48.740204 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.740215 | orchestrator | 2025-08-29 17:16:48.740226 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-08-29 17:16:48.740237 | orchestrator | Friday 29 August 2025 17:16:44 +0000 (0:00:00.135) 0:00:20.172 ********* 2025-08-29 17:16:48.740248 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.740258 | orchestrator | 2025-08-29 17:16:48.740299 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-08-29 17:16:48.740311 | orchestrator | Friday 29 August 2025 17:16:45 +0000 (0:00:00.138) 0:00:20.311 ********* 2025-08-29 17:16:48.740322 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.740332 | orchestrator | 2025-08-29 17:16:48.740343 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-08-29 17:16:48.740354 | orchestrator | Friday 29 August 2025 17:16:45 +0000 (0:00:00.370) 0:00:20.681 ********* 2025-08-29 17:16:48.740365 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.740375 | orchestrator | 2025-08-29 17:16:48.740386 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-08-29 17:16:48.740397 | orchestrator | Friday 29 August 2025 17:16:45 +0000 (0:00:00.151) 0:00:20.833 ********* 2025-08-29 17:16:48.740408 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 17:16:48.740418 | orchestrator | 2025-08-29 17:16:48.740447 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-08-29 17:16:48.740458 | orchestrator | Friday 29 August 2025 17:16:45 +0000 (0:00:00.146) 0:00:20.979 ********* 2025-08-29 17:16:48.740469 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.740480 | orchestrator | 2025-08-29 17:16:48.740490 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-08-29 17:16:48.740501 | orchestrator | Friday 29 August 2025 17:16:45 +0000 (0:00:00.165) 0:00:21.145 ********* 2025-08-29 17:16:48.740512 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.740523 | orchestrator | 2025-08-29 17:16:48.740533 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-08-29 17:16:48.740544 | orchestrator | Friday 29 August 2025 17:16:46 +0000 (0:00:00.151) 0:00:21.297 ********* 2025-08-29 17:16:48.740555 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.740566 | orchestrator | 2025-08-29 17:16:48.740577 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-08-29 17:16:48.740607 | orchestrator | Friday 29 August 2025 17:16:46 +0000 (0:00:00.136) 0:00:21.434 ********* 2025-08-29 17:16:48.740618 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.740629 | orchestrator | 2025-08-29 17:16:48.740640 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-08-29 17:16:48.740651 | orchestrator | Friday 29 August 2025 17:16:46 +0000 (0:00:00.144) 0:00:21.578 ********* 2025-08-29 17:16:48.740662 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.740673 | orchestrator | 2025-08-29 17:16:48.740683 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-08-29 17:16:48.740694 | 
orchestrator | Friday 29 August 2025 17:16:46 +0000 (0:00:00.179) 0:00:21.758 ********* 2025-08-29 17:16:48.740705 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.740715 | orchestrator | 2025-08-29 17:16:48.740726 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-08-29 17:16:48.740737 | orchestrator | Friday 29 August 2025 17:16:46 +0000 (0:00:00.176) 0:00:21.935 ********* 2025-08-29 17:16:48.740748 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.740759 | orchestrator | 2025-08-29 17:16:48.740779 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-08-29 17:16:48.740791 | orchestrator | Friday 29 August 2025 17:16:46 +0000 (0:00:00.184) 0:00:22.119 ********* 2025-08-29 17:16:48.740801 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.740812 | orchestrator | 2025-08-29 17:16:48.740823 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-08-29 17:16:48.740841 | orchestrator | Friday 29 August 2025 17:16:47 +0000 (0:00:00.169) 0:00:22.288 ********* 2025-08-29 17:16:48.740858 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.740877 | orchestrator | 2025-08-29 17:16:48.740896 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-08-29 17:16:48.740915 | orchestrator | Friday 29 August 2025 17:16:47 +0000 (0:00:00.178) 0:00:22.467 ********* 2025-08-29 17:16:48.740932 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'})  2025-08-29 17:16:48.740945 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'})  2025-08-29 17:16:48.740956 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
17:16:48.740967 | orchestrator | 2025-08-29 17:16:48.740978 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-08-29 17:16:48.740996 | orchestrator | Friday 29 August 2025 17:16:47 +0000 (0:00:00.466) 0:00:22.933 ********* 2025-08-29 17:16:48.741014 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'})  2025-08-29 17:16:48.741032 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'})  2025-08-29 17:16:48.741052 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.741071 | orchestrator | 2025-08-29 17:16:48.741089 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-08-29 17:16:48.741102 | orchestrator | Friday 29 August 2025 17:16:47 +0000 (0:00:00.203) 0:00:23.136 ********* 2025-08-29 17:16:48.741128 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'})  2025-08-29 17:16:48.741140 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'})  2025-08-29 17:16:48.741151 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.741162 | orchestrator | 2025-08-29 17:16:48.741173 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-08-29 17:16:48.741184 | orchestrator | Friday 29 August 2025 17:16:48 +0000 (0:00:00.195) 0:00:23.332 ********* 2025-08-29 17:16:48.741195 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'})  2025-08-29 
17:16:48.741206 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'})  2025-08-29 17:16:48.741217 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.741228 | orchestrator | 2025-08-29 17:16:48.741239 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-08-29 17:16:48.741250 | orchestrator | Friday 29 August 2025 17:16:48 +0000 (0:00:00.200) 0:00:23.533 ********* 2025-08-29 17:16:48.741261 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'})  2025-08-29 17:16:48.741305 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'})  2025-08-29 17:16:48.741317 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:48.741336 | orchestrator | 2025-08-29 17:16:48.741347 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-08-29 17:16:48.741358 | orchestrator | Friday 29 August 2025 17:16:48 +0000 (0:00:00.187) 0:00:23.720 ********* 2025-08-29 17:16:48.741369 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'})  2025-08-29 17:16:48.741389 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'})  2025-08-29 17:16:54.372755 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:54.372872 | orchestrator | 2025-08-29 17:16:54.372889 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-08-29 17:16:54.372903 | orchestrator | Friday 29 August 2025 
17:16:48 +0000 (0:00:00.205) 0:00:23.926 ********* 2025-08-29 17:16:54.372915 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'})  2025-08-29 17:16:54.372929 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'})  2025-08-29 17:16:54.372940 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:54.372951 | orchestrator | 2025-08-29 17:16:54.372964 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-08-29 17:16:54.372983 | orchestrator | Friday 29 August 2025 17:16:48 +0000 (0:00:00.166) 0:00:24.093 ********* 2025-08-29 17:16:54.373001 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'})  2025-08-29 17:16:54.373020 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'})  2025-08-29 17:16:54.373041 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:54.373059 | orchestrator | 2025-08-29 17:16:54.373077 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-08-29 17:16:54.373094 | orchestrator | Friday 29 August 2025 17:16:49 +0000 (0:00:00.145) 0:00:24.238 ********* 2025-08-29 17:16:54.373110 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:16:54.373126 | orchestrator | 2025-08-29 17:16:54.373143 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-08-29 17:16:54.373162 | orchestrator | Friday 29 August 2025 17:16:49 +0000 (0:00:00.537) 0:00:24.775 ********* 2025-08-29 17:16:54.373179 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:16:54.373199 | 
orchestrator | 2025-08-29 17:16:54.373217 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-08-29 17:16:54.373239 | orchestrator | Friday 29 August 2025 17:16:50 +0000 (0:00:00.516) 0:00:25.292 ********* 2025-08-29 17:16:54.373257 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:16:54.373302 | orchestrator | 2025-08-29 17:16:54.373316 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-08-29 17:16:54.373328 | orchestrator | Friday 29 August 2025 17:16:50 +0000 (0:00:00.158) 0:00:25.450 ********* 2025-08-29 17:16:54.373341 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'vg_name': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'}) 2025-08-29 17:16:54.373355 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'vg_name': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'}) 2025-08-29 17:16:54.373367 | orchestrator | 2025-08-29 17:16:54.373380 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-08-29 17:16:54.373392 | orchestrator | Friday 29 August 2025 17:16:50 +0000 (0:00:00.183) 0:00:25.634 ********* 2025-08-29 17:16:54.373405 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'})  2025-08-29 17:16:54.373446 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'})  2025-08-29 17:16:54.373459 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:54.373471 | orchestrator | 2025-08-29 17:16:54.373482 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-08-29 17:16:54.373493 | orchestrator | Friday 29 August 2025 17:16:50 +0000 
(0:00:00.389) 0:00:26.024 ********* 2025-08-29 17:16:54.373504 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'})  2025-08-29 17:16:54.373515 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'})  2025-08-29 17:16:54.373526 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:54.373536 | orchestrator | 2025-08-29 17:16:54.373547 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-08-29 17:16:54.373558 | orchestrator | Friday 29 August 2025 17:16:50 +0000 (0:00:00.165) 0:00:26.189 ********* 2025-08-29 17:16:54.373569 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'})  2025-08-29 17:16:54.373580 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'})  2025-08-29 17:16:54.373591 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:16:54.373602 | orchestrator | 2025-08-29 17:16:54.373612 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-08-29 17:16:54.373623 | orchestrator | Friday 29 August 2025 17:16:51 +0000 (0:00:00.188) 0:00:26.377 ********* 2025-08-29 17:16:54.373634 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 17:16:54.373645 | orchestrator |  "lvm_report": { 2025-08-29 17:16:54.373657 | orchestrator |  "lv": [ 2025-08-29 17:16:54.373668 | orchestrator |  { 2025-08-29 17:16:54.373699 | orchestrator |  "lv_name": "osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3", 2025-08-29 17:16:54.373712 | orchestrator |  "vg_name": "ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3" 2025-08-29 
17:16:54.373723 | orchestrator |  }, 2025-08-29 17:16:54.373733 | orchestrator |  { 2025-08-29 17:16:54.373745 | orchestrator |  "lv_name": "osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6", 2025-08-29 17:16:54.373756 | orchestrator |  "vg_name": "ceph-b00dade2-f82b-53af-89a3-8c9250354ec6" 2025-08-29 17:16:54.373766 | orchestrator |  } 2025-08-29 17:16:54.373777 | orchestrator |  ], 2025-08-29 17:16:54.373788 | orchestrator |  "pv": [ 2025-08-29 17:16:54.373799 | orchestrator |  { 2025-08-29 17:16:54.373810 | orchestrator |  "pv_name": "/dev/sdb", 2025-08-29 17:16:54.373821 | orchestrator |  "vg_name": "ceph-b00dade2-f82b-53af-89a3-8c9250354ec6" 2025-08-29 17:16:54.373832 | orchestrator |  }, 2025-08-29 17:16:54.373843 | orchestrator |  { 2025-08-29 17:16:54.373854 | orchestrator |  "pv_name": "/dev/sdc", 2025-08-29 17:16:54.373865 | orchestrator |  "vg_name": "ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3" 2025-08-29 17:16:54.373875 | orchestrator |  } 2025-08-29 17:16:54.373886 | orchestrator |  ] 2025-08-29 17:16:54.373897 | orchestrator |  } 2025-08-29 17:16:54.373908 | orchestrator | } 2025-08-29 17:16:54.373919 | orchestrator | 2025-08-29 17:16:54.373930 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-08-29 17:16:54.373942 | orchestrator | 2025-08-29 17:16:54.373953 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 17:16:54.373964 | orchestrator | Friday 29 August 2025 17:16:51 +0000 (0:00:00.288) 0:00:26.666 ********* 2025-08-29 17:16:54.373975 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-08-29 17:16:54.374007 | orchestrator | 2025-08-29 17:16:54.374091 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 17:16:54.374102 | orchestrator | Friday 29 August 2025 17:16:51 +0000 (0:00:00.264) 0:00:26.931 ********* 2025-08-29 17:16:54.374113 | orchestrator | ok: 
[testbed-node-4] 2025-08-29 17:16:54.374124 | orchestrator | 2025-08-29 17:16:54.374135 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:16:54.374145 | orchestrator | Friday 29 August 2025 17:16:51 +0000 (0:00:00.253) 0:00:27.184 ********* 2025-08-29 17:16:54.374207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-08-29 17:16:54.374220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-08-29 17:16:54.374231 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-08-29 17:16:54.374242 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-08-29 17:16:54.374253 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-08-29 17:16:54.374264 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-08-29 17:16:54.374295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-08-29 17:16:54.374312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-08-29 17:16:54.374323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-08-29 17:16:54.374334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-08-29 17:16:54.374345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-08-29 17:16:54.374356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-08-29 17:16:54.374367 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-08-29 17:16:54.374378 | orchestrator | 2025-08-29 
17:16:54.374389 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:16:54.374400 | orchestrator | Friday 29 August 2025 17:16:52 +0000 (0:00:00.429) 0:00:27.613 ********* 2025-08-29 17:16:54.374411 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:16:54.374421 | orchestrator | 2025-08-29 17:16:54.374432 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:16:54.374443 | orchestrator | Friday 29 August 2025 17:16:52 +0000 (0:00:00.212) 0:00:27.826 ********* 2025-08-29 17:16:54.374454 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:16:54.374464 | orchestrator | 2025-08-29 17:16:54.374475 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:16:54.374486 | orchestrator | Friday 29 August 2025 17:16:52 +0000 (0:00:00.205) 0:00:28.031 ********* 2025-08-29 17:16:54.374497 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:16:54.374507 | orchestrator | 2025-08-29 17:16:54.374518 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:16:54.374529 | orchestrator | Friday 29 August 2025 17:16:53 +0000 (0:00:00.646) 0:00:28.677 ********* 2025-08-29 17:16:54.374539 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:16:54.374550 | orchestrator | 2025-08-29 17:16:54.374560 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:16:54.374571 | orchestrator | Friday 29 August 2025 17:16:53 +0000 (0:00:00.216) 0:00:28.894 ********* 2025-08-29 17:16:54.374582 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:16:54.374592 | orchestrator | 2025-08-29 17:16:54.374603 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:16:54.374614 | orchestrator | Friday 29 August 2025 17:16:53 +0000 (0:00:00.246) 
0:00:29.140 ********* 2025-08-29 17:16:54.374624 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:16:54.374635 | orchestrator | 2025-08-29 17:16:54.374653 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:16:54.374664 | orchestrator | Friday 29 August 2025 17:16:54 +0000 (0:00:00.209) 0:00:29.349 ********* 2025-08-29 17:16:54.374675 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:16:54.374686 | orchestrator | 2025-08-29 17:16:54.374707 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:05.123315 | orchestrator | Friday 29 August 2025 17:16:54 +0000 (0:00:00.207) 0:00:29.556 ********* 2025-08-29 17:17:05.123411 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:05.123426 | orchestrator | 2025-08-29 17:17:05.123439 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:05.123451 | orchestrator | Friday 29 August 2025 17:16:54 +0000 (0:00:00.202) 0:00:29.759 ********* 2025-08-29 17:17:05.123462 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3) 2025-08-29 17:17:05.123475 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3) 2025-08-29 17:17:05.123486 | orchestrator | 2025-08-29 17:17:05.123497 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:05.123508 | orchestrator | Friday 29 August 2025 17:16:54 +0000 (0:00:00.419) 0:00:30.178 ********* 2025-08-29 17:17:05.123520 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fde2b913-b6a3-4677-89e6-f2f4f6c968dc) 2025-08-29 17:17:05.123531 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fde2b913-b6a3-4677-89e6-f2f4f6c968dc) 2025-08-29 17:17:05.123542 | orchestrator | 2025-08-29 17:17:05.123553 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:05.123564 | orchestrator | Friday 29 August 2025 17:16:55 +0000 (0:00:00.457) 0:00:30.636 ********* 2025-08-29 17:17:05.123575 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d9d2a167-7162-4dbb-9ae3-4acc9c24be60) 2025-08-29 17:17:05.123586 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d9d2a167-7162-4dbb-9ae3-4acc9c24be60) 2025-08-29 17:17:05.123596 | orchestrator | 2025-08-29 17:17:05.123607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:05.123619 | orchestrator | Friday 29 August 2025 17:16:55 +0000 (0:00:00.449) 0:00:31.086 ********* 2025-08-29 17:17:05.123629 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_822f3ffb-f416-492b-baa5-9c709b0e03df) 2025-08-29 17:17:05.123640 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_822f3ffb-f416-492b-baa5-9c709b0e03df) 2025-08-29 17:17:05.123651 | orchestrator | 2025-08-29 17:17:05.123662 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:05.123673 | orchestrator | Friday 29 August 2025 17:16:56 +0000 (0:00:00.616) 0:00:31.703 ********* 2025-08-29 17:17:05.123684 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 17:17:05.123695 | orchestrator | 2025-08-29 17:17:05.123706 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:05.123717 | orchestrator | Friday 29 August 2025 17:16:56 +0000 (0:00:00.366) 0:00:32.069 ********* 2025-08-29 17:17:05.123728 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-08-29 17:17:05.123757 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-08-29 
17:17:05.123768 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-08-29 17:17:05.123779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-08-29 17:17:05.123790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-08-29 17:17:05.123801 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-08-29 17:17:05.123812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-08-29 17:17:05.123846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-08-29 17:17:05.123859 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-08-29 17:17:05.123871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-08-29 17:17:05.123884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-08-29 17:17:05.123897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-08-29 17:17:05.123910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-08-29 17:17:05.123922 | orchestrator | 2025-08-29 17:17:05.123935 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:05.123948 | orchestrator | Friday 29 August 2025 17:16:57 +0000 (0:00:00.748) 0:00:32.817 ********* 2025-08-29 17:17:05.123960 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:05.123973 | orchestrator | 2025-08-29 17:17:05.123985 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:05.123998 | orchestrator | Friday 29 
August 2025 17:16:57 +0000 (0:00:00.217) 0:00:33.035 ********* 2025-08-29 17:17:05.124011 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:05.124024 | orchestrator | 2025-08-29 17:17:05.124036 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:05.124049 | orchestrator | Friday 29 August 2025 17:16:58 +0000 (0:00:00.210) 0:00:33.245 ********* 2025-08-29 17:17:05.124061 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:05.124074 | orchestrator | 2025-08-29 17:17:05.124086 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:05.124099 | orchestrator | Friday 29 August 2025 17:16:58 +0000 (0:00:00.199) 0:00:33.445 ********* 2025-08-29 17:17:05.124112 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:05.124125 | orchestrator | 2025-08-29 17:17:05.124153 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:05.124167 | orchestrator | Friday 29 August 2025 17:16:58 +0000 (0:00:00.222) 0:00:33.668 ********* 2025-08-29 17:17:05.124179 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:05.124190 | orchestrator | 2025-08-29 17:17:05.124201 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:05.124212 | orchestrator | Friday 29 August 2025 17:16:58 +0000 (0:00:00.205) 0:00:33.873 ********* 2025-08-29 17:17:05.124222 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:05.124233 | orchestrator | 2025-08-29 17:17:05.124244 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:05.124255 | orchestrator | Friday 29 August 2025 17:16:58 +0000 (0:00:00.212) 0:00:34.085 ********* 2025-08-29 17:17:05.124266 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:05.124319 | orchestrator | 2025-08-29 17:17:05.124340 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:05.124352 | orchestrator | Friday 29 August 2025 17:16:59 +0000 (0:00:00.210) 0:00:34.296 ********* 2025-08-29 17:17:05.124363 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:05.124374 | orchestrator | 2025-08-29 17:17:05.124385 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:05.124396 | orchestrator | Friday 29 August 2025 17:16:59 +0000 (0:00:00.220) 0:00:34.516 ********* 2025-08-29 17:17:05.124406 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-08-29 17:17:05.124417 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-08-29 17:17:05.124428 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-08-29 17:17:05.124439 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-08-29 17:17:05.124450 | orchestrator | 2025-08-29 17:17:05.124461 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:05.124472 | orchestrator | Friday 29 August 2025 17:17:00 +0000 (0:00:00.910) 0:00:35.427 ********* 2025-08-29 17:17:05.124492 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:05.124503 | orchestrator | 2025-08-29 17:17:05.124514 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:05.124525 | orchestrator | Friday 29 August 2025 17:17:00 +0000 (0:00:00.212) 0:00:35.640 ********* 2025-08-29 17:17:05.124536 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:05.124546 | orchestrator | 2025-08-29 17:17:05.124557 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:05.124573 | orchestrator | Friday 29 August 2025 17:17:00 +0000 (0:00:00.202) 0:00:35.842 ********* 2025-08-29 17:17:05.124591 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:05.124607 | 
orchestrator | 2025-08-29 17:17:05.124626 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:05.124645 | orchestrator | Friday 29 August 2025 17:17:01 +0000 (0:00:00.650) 0:00:36.492 ********* 2025-08-29 17:17:05.124656 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:05.124667 | orchestrator | 2025-08-29 17:17:05.124678 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 17:17:05.124688 | orchestrator | Friday 29 August 2025 17:17:01 +0000 (0:00:00.220) 0:00:36.713 ********* 2025-08-29 17:17:05.124699 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:05.124710 | orchestrator | 2025-08-29 17:17:05.124720 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-08-29 17:17:05.124731 | orchestrator | Friday 29 August 2025 17:17:01 +0000 (0:00:00.160) 0:00:36.874 ********* 2025-08-29 17:17:05.124742 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7cc16d54-75e9-5c21-b21a-878ce6efb3d6'}}) 2025-08-29 17:17:05.124753 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '53dd44b5-7849-5101-9e2a-fd90ac927c8f'}}) 2025-08-29 17:17:05.124764 | orchestrator | 2025-08-29 17:17:05.124775 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 17:17:05.124785 | orchestrator | Friday 29 August 2025 17:17:01 +0000 (0:00:00.195) 0:00:37.070 ********* 2025-08-29 17:17:05.124797 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'}) 2025-08-29 17:17:05.124810 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'}) 2025-08-29 17:17:05.124821 | 
orchestrator | 2025-08-29 17:17:05.124831 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-08-29 17:17:05.124842 | orchestrator | Friday 29 August 2025 17:17:03 +0000 (0:00:01.757) 0:00:38.828 ********* 2025-08-29 17:17:05.124853 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:05.124865 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:05.124876 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:05.124887 | orchestrator | 2025-08-29 17:17:05.124897 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-08-29 17:17:05.124908 | orchestrator | Friday 29 August 2025 17:17:03 +0000 (0:00:00.185) 0:00:39.014 ********* 2025-08-29 17:17:05.124919 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'}) 2025-08-29 17:17:05.124930 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'}) 2025-08-29 17:17:05.124942 | orchestrator | 2025-08-29 17:17:05.124960 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-08-29 17:17:11.001245 | orchestrator | Friday 29 August 2025 17:17:05 +0000 (0:00:01.288) 0:00:40.302 ********* 2025-08-29 17:17:11.001434 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:11.001453 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:11.001465 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.001477 | orchestrator | 2025-08-29 17:17:11.001489 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-08-29 17:17:11.001500 | orchestrator | Friday 29 August 2025 17:17:05 +0000 (0:00:00.170) 0:00:40.472 ********* 2025-08-29 17:17:11.001511 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.001522 | orchestrator | 2025-08-29 17:17:11.001533 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-08-29 17:17:11.001544 | orchestrator | Friday 29 August 2025 17:17:05 +0000 (0:00:00.168) 0:00:40.641 ********* 2025-08-29 17:17:11.001555 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:11.001582 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:11.001593 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.001604 | orchestrator | 2025-08-29 17:17:11.001615 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-08-29 17:17:11.001626 | orchestrator | Friday 29 August 2025 17:17:05 +0000 (0:00:00.185) 0:00:40.827 ********* 2025-08-29 17:17:11.001636 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.001647 | orchestrator | 2025-08-29 17:17:11.001658 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-08-29 17:17:11.001669 | orchestrator | Friday 29 August 2025 17:17:05 +0000 (0:00:00.139) 0:00:40.966 ********* 2025-08-29 17:17:11.001680 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:11.001691 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:11.001702 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.001712 | orchestrator | 2025-08-29 17:17:11.001724 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-08-29 17:17:11.001734 | orchestrator | Friday 29 August 2025 17:17:05 +0000 (0:00:00.171) 0:00:41.138 ********* 2025-08-29 17:17:11.001750 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.001761 | orchestrator | 2025-08-29 17:17:11.001772 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-08-29 17:17:11.001785 | orchestrator | Friday 29 August 2025 17:17:06 +0000 (0:00:00.396) 0:00:41.534 ********* 2025-08-29 17:17:11.001799 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:11.001812 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:11.001824 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.001837 | orchestrator | 2025-08-29 17:17:11.001849 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-08-29 17:17:11.001862 | orchestrator | Friday 29 August 2025 17:17:06 +0000 (0:00:00.170) 0:00:41.705 ********* 2025-08-29 17:17:11.001874 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:17:11.001888 | orchestrator | 2025-08-29 17:17:11.001900 | orchestrator | TASK [Count OSDs put on ceph_db_devices 
defined in lvm_volumes] **************** 2025-08-29 17:17:11.001913 | orchestrator | Friday 29 August 2025 17:17:06 +0000 (0:00:00.167) 0:00:41.872 ********* 2025-08-29 17:17:11.001933 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:11.001947 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:11.001960 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.001973 | orchestrator | 2025-08-29 17:17:11.001985 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-08-29 17:17:11.001998 | orchestrator | Friday 29 August 2025 17:17:06 +0000 (0:00:00.147) 0:00:42.020 ********* 2025-08-29 17:17:11.002010 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:11.002079 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:11.002092 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.002104 | orchestrator | 2025-08-29 17:17:11.002116 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-08-29 17:17:11.002128 | orchestrator | Friday 29 August 2025 17:17:06 +0000 (0:00:00.164) 0:00:42.184 ********* 2025-08-29 17:17:11.002158 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:11.002169 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 
'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:11.002180 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.002191 | orchestrator | 2025-08-29 17:17:11.002201 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-08-29 17:17:11.002212 | orchestrator | Friday 29 August 2025 17:17:07 +0000 (0:00:00.173) 0:00:42.358 ********* 2025-08-29 17:17:11.002223 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.002233 | orchestrator | 2025-08-29 17:17:11.002244 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-08-29 17:17:11.002254 | orchestrator | Friday 29 August 2025 17:17:07 +0000 (0:00:00.141) 0:00:42.500 ********* 2025-08-29 17:17:11.002265 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.002304 | orchestrator | 2025-08-29 17:17:11.002318 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-08-29 17:17:11.002328 | orchestrator | Friday 29 August 2025 17:17:07 +0000 (0:00:00.146) 0:00:42.646 ********* 2025-08-29 17:17:11.002339 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.002349 | orchestrator | 2025-08-29 17:17:11.002360 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-08-29 17:17:11.002371 | orchestrator | Friday 29 August 2025 17:17:07 +0000 (0:00:00.149) 0:00:42.795 ********* 2025-08-29 17:17:11.002381 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 17:17:11.002392 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-08-29 17:17:11.002403 | orchestrator | } 2025-08-29 17:17:11.002414 | orchestrator | 2025-08-29 17:17:11.002425 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-08-29 17:17:11.002436 | orchestrator | Friday 29 August 2025 17:17:07 +0000 (0:00:00.141) 0:00:42.937 ********* 2025-08-29 17:17:11.002446 | 
orchestrator | ok: [testbed-node-4] => { 2025-08-29 17:17:11.002457 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-08-29 17:17:11.002468 | orchestrator | } 2025-08-29 17:17:11.002478 | orchestrator | 2025-08-29 17:17:11.002489 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-08-29 17:17:11.002499 | orchestrator | Friday 29 August 2025 17:17:07 +0000 (0:00:00.142) 0:00:43.080 ********* 2025-08-29 17:17:11.002510 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 17:17:11.002521 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-08-29 17:17:11.002540 | orchestrator | } 2025-08-29 17:17:11.002550 | orchestrator | 2025-08-29 17:17:11.002561 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-08-29 17:17:11.002572 | orchestrator | Friday 29 August 2025 17:17:08 +0000 (0:00:00.147) 0:00:43.227 ********* 2025-08-29 17:17:11.002583 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:17:11.002594 | orchestrator | 2025-08-29 17:17:11.002604 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-08-29 17:17:11.002616 | orchestrator | Friday 29 August 2025 17:17:08 +0000 (0:00:00.740) 0:00:43.968 ********* 2025-08-29 17:17:11.002632 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:17:11.002644 | orchestrator | 2025-08-29 17:17:11.002655 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-08-29 17:17:11.002665 | orchestrator | Friday 29 August 2025 17:17:09 +0000 (0:00:00.546) 0:00:44.514 ********* 2025-08-29 17:17:11.002676 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:17:11.002686 | orchestrator | 2025-08-29 17:17:11.002697 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-08-29 17:17:11.002708 | orchestrator | Friday 29 August 2025 17:17:09 +0000 (0:00:00.501) 0:00:45.015 ********* 2025-08-29 
17:17:11.002719 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:17:11.002729 | orchestrator | 2025-08-29 17:17:11.002740 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-08-29 17:17:11.002751 | orchestrator | Friday 29 August 2025 17:17:09 +0000 (0:00:00.150) 0:00:45.165 ********* 2025-08-29 17:17:11.002762 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.002773 | orchestrator | 2025-08-29 17:17:11.002783 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-08-29 17:17:11.002794 | orchestrator | Friday 29 August 2025 17:17:10 +0000 (0:00:00.141) 0:00:45.307 ********* 2025-08-29 17:17:11.002805 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.002815 | orchestrator | 2025-08-29 17:17:11.002826 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-08-29 17:17:11.002837 | orchestrator | Friday 29 August 2025 17:17:10 +0000 (0:00:00.123) 0:00:45.430 ********* 2025-08-29 17:17:11.002848 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 17:17:11.002858 | orchestrator |  "vgs_report": { 2025-08-29 17:17:11.002870 | orchestrator |  "vg": [] 2025-08-29 17:17:11.002882 | orchestrator |  } 2025-08-29 17:17:11.002893 | orchestrator | } 2025-08-29 17:17:11.002904 | orchestrator | 2025-08-29 17:17:11.002914 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-08-29 17:17:11.002925 | orchestrator | Friday 29 August 2025 17:17:10 +0000 (0:00:00.171) 0:00:45.602 ********* 2025-08-29 17:17:11.002936 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.002946 | orchestrator | 2025-08-29 17:17:11.002958 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-08-29 17:17:11.002977 | orchestrator | Friday 29 August 2025 17:17:10 +0000 (0:00:00.147) 0:00:45.750 ********* 2025-08-29 
17:17:11.002995 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.003010 | orchestrator | 2025-08-29 17:17:11.003028 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-08-29 17:17:11.003045 | orchestrator | Friday 29 August 2025 17:17:10 +0000 (0:00:00.145) 0:00:45.896 ********* 2025-08-29 17:17:11.003062 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.003080 | orchestrator | 2025-08-29 17:17:11.003100 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-08-29 17:17:11.003118 | orchestrator | Friday 29 August 2025 17:17:10 +0000 (0:00:00.143) 0:00:46.039 ********* 2025-08-29 17:17:11.003138 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:11.003150 | orchestrator | 2025-08-29 17:17:11.003161 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-08-29 17:17:11.003181 | orchestrator | Friday 29 August 2025 17:17:10 +0000 (0:00:00.144) 0:00:46.184 ********* 2025-08-29 17:17:16.015558 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.015662 | orchestrator | 2025-08-29 17:17:16.015702 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-08-29 17:17:16.015715 | orchestrator | Friday 29 August 2025 17:17:11 +0000 (0:00:00.138) 0:00:46.323 ********* 2025-08-29 17:17:16.015726 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.015737 | orchestrator | 2025-08-29 17:17:16.015749 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-08-29 17:17:16.015760 | orchestrator | Friday 29 August 2025 17:17:11 +0000 (0:00:00.376) 0:00:46.699 ********* 2025-08-29 17:17:16.015770 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.015781 | orchestrator | 2025-08-29 17:17:16.015791 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
**************** 2025-08-29 17:17:16.015802 | orchestrator | Friday 29 August 2025 17:17:11 +0000 (0:00:00.145) 0:00:46.845 ********* 2025-08-29 17:17:16.015812 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.015823 | orchestrator | 2025-08-29 17:17:16.015834 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-08-29 17:17:16.015844 | orchestrator | Friday 29 August 2025 17:17:11 +0000 (0:00:00.151) 0:00:46.997 ********* 2025-08-29 17:17:16.015855 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.015865 | orchestrator | 2025-08-29 17:17:16.015876 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-08-29 17:17:16.015887 | orchestrator | Friday 29 August 2025 17:17:11 +0000 (0:00:00.156) 0:00:47.154 ********* 2025-08-29 17:17:16.015897 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.015908 | orchestrator | 2025-08-29 17:17:16.015918 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-08-29 17:17:16.015929 | orchestrator | Friday 29 August 2025 17:17:12 +0000 (0:00:00.163) 0:00:47.318 ********* 2025-08-29 17:17:16.015939 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.015950 | orchestrator | 2025-08-29 17:17:16.015961 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-08-29 17:17:16.015971 | orchestrator | Friday 29 August 2025 17:17:12 +0000 (0:00:00.158) 0:00:47.476 ********* 2025-08-29 17:17:16.015982 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.015992 | orchestrator | 2025-08-29 17:17:16.016003 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-08-29 17:17:16.016013 | orchestrator | Friday 29 August 2025 17:17:12 +0000 (0:00:00.174) 0:00:47.650 ********* 2025-08-29 17:17:16.016024 | orchestrator | skipping: [testbed-node-4] 
2025-08-29 17:17:16.016034 | orchestrator | 2025-08-29 17:17:16.016045 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-08-29 17:17:16.016055 | orchestrator | Friday 29 August 2025 17:17:12 +0000 (0:00:00.174) 0:00:47.825 ********* 2025-08-29 17:17:16.016066 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.016076 | orchestrator | 2025-08-29 17:17:16.016087 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-08-29 17:17:16.016097 | orchestrator | Friday 29 August 2025 17:17:12 +0000 (0:00:00.196) 0:00:48.022 ********* 2025-08-29 17:17:16.016125 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:16.016141 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:16.016153 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.016166 | orchestrator | 2025-08-29 17:17:16.016179 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-08-29 17:17:16.016191 | orchestrator | Friday 29 August 2025 17:17:13 +0000 (0:00:00.186) 0:00:48.208 ********* 2025-08-29 17:17:16.016203 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:16.016216 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:16.016237 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.016250 | orchestrator | 2025-08-29 17:17:16.016262 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] 
************************************* 2025-08-29 17:17:16.016274 | orchestrator | Friday 29 August 2025 17:17:13 +0000 (0:00:00.148) 0:00:48.356 ********* 2025-08-29 17:17:16.016326 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:16.016346 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:16.016366 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.016385 | orchestrator | 2025-08-29 17:17:16.016405 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-08-29 17:17:16.016419 | orchestrator | Friday 29 August 2025 17:17:13 +0000 (0:00:00.162) 0:00:48.518 ********* 2025-08-29 17:17:16.016432 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:16.016445 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:16.016458 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.016470 | orchestrator | 2025-08-29 17:17:16.016482 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-08-29 17:17:16.016511 | orchestrator | Friday 29 August 2025 17:17:13 +0000 (0:00:00.371) 0:00:48.890 ********* 2025-08-29 17:17:16.016522 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:16.016533 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 
'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:16.016544 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.016555 | orchestrator | 2025-08-29 17:17:16.016566 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-08-29 17:17:16.016576 | orchestrator | Friday 29 August 2025 17:17:13 +0000 (0:00:00.149) 0:00:49.040 ********* 2025-08-29 17:17:16.016587 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:16.016598 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:16.016609 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.016620 | orchestrator | 2025-08-29 17:17:16.016631 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-08-29 17:17:16.016642 | orchestrator | Friday 29 August 2025 17:17:14 +0000 (0:00:00.177) 0:00:49.217 ********* 2025-08-29 17:17:16.016654 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:16.016665 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:16.016675 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.016686 | orchestrator | 2025-08-29 17:17:16.016697 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-08-29 17:17:16.016708 | orchestrator | Friday 29 August 2025 17:17:14 +0000 (0:00:00.169) 0:00:49.387 ********* 2025-08-29 17:17:16.016718 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:16.016738 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:16.016749 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.016760 | orchestrator | 2025-08-29 17:17:16.016771 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-08-29 17:17:16.016822 | orchestrator | Friday 29 August 2025 17:17:14 +0000 (0:00:00.186) 0:00:49.574 ********* 2025-08-29 17:17:16.016834 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:17:16.016845 | orchestrator | 2025-08-29 17:17:16.016856 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-08-29 17:17:16.016867 | orchestrator | Friday 29 August 2025 17:17:14 +0000 (0:00:00.486) 0:00:50.061 ********* 2025-08-29 17:17:16.016878 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:17:16.016889 | orchestrator | 2025-08-29 17:17:16.016900 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-08-29 17:17:16.016911 | orchestrator | Friday 29 August 2025 17:17:15 +0000 (0:00:00.503) 0:00:50.564 ********* 2025-08-29 17:17:16.016921 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:17:16.016932 | orchestrator | 2025-08-29 17:17:16.016943 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-08-29 17:17:16.016954 | orchestrator | Friday 29 August 2025 17:17:15 +0000 (0:00:00.139) 0:00:50.704 ********* 2025-08-29 17:17:16.016965 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'vg_name': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'}) 2025-08-29 17:17:16.016977 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'vg_name': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'}) 2025-08-29 17:17:16.016988 | orchestrator | 2025-08-29 17:17:16.016998 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-08-29 17:17:16.017009 | orchestrator | Friday 29 August 2025 17:17:15 +0000 (0:00:00.169) 0:00:50.874 ********* 2025-08-29 17:17:16.017020 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:16.017031 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:16.017042 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:16.017052 | orchestrator | 2025-08-29 17:17:16.017063 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-08-29 17:17:16.017074 | orchestrator | Friday 29 August 2025 17:17:15 +0000 (0:00:00.177) 0:00:51.051 ********* 2025-08-29 17:17:16.017085 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})  2025-08-29 17:17:16.017096 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})  2025-08-29 17:17:16.017113 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:17:22.316479 | orchestrator | 2025-08-29 17:17:22.316584 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-08-29 17:17:22.316601 | orchestrator | Friday 29 August 2025 17:17:16 +0000 (0:00:00.146) 0:00:51.198 ********* 2025-08-29 17:17:22.316614 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'})
2025-08-29 17:17:22.316627 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'})
2025-08-29 17:17:22.316638 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:17:22.316650 | orchestrator |
2025-08-29 17:17:22.316661 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-08-29 17:17:22.316672 | orchestrator | Friday 29 August 2025 17:17:16 +0000 (0:00:00.158) 0:00:51.356 *********
2025-08-29 17:17:22.316708 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 17:17:22.316720 | orchestrator |     "lvm_report": {
2025-08-29 17:17:22.316733 | orchestrator |         "lv": [
2025-08-29 17:17:22.316745 | orchestrator |             {
2025-08-29 17:17:22.316756 | orchestrator |                 "lv_name": "osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f",
2025-08-29 17:17:22.316768 | orchestrator |                 "vg_name": "ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f"
2025-08-29 17:17:22.316779 | orchestrator |             },
2025-08-29 17:17:22.316790 | orchestrator |             {
2025-08-29 17:17:22.316801 | orchestrator |                 "lv_name": "osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6",
2025-08-29 17:17:22.316811 | orchestrator |                 "vg_name": "ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6"
2025-08-29 17:17:22.316822 | orchestrator |             }
2025-08-29 17:17:22.316833 | orchestrator |         ],
2025-08-29 17:17:22.316844 | orchestrator |         "pv": [
2025-08-29 17:17:22.316855 | orchestrator |             {
2025-08-29 17:17:22.316865 | orchestrator |                 "pv_name": "/dev/sdb",
2025-08-29 17:17:22.316876 | orchestrator |                 "vg_name": "ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6"
2025-08-29 17:17:22.316887 | orchestrator |             },
2025-08-29 17:17:22.316898 | orchestrator |             {
2025-08-29 17:17:22.316909 | orchestrator |                 "pv_name": "/dev/sdc",
2025-08-29 17:17:22.316920 | orchestrator |                 "vg_name": "ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f"
2025-08-29 17:17:22.316930 | orchestrator |             }
2025-08-29 17:17:22.316941 | orchestrator |         ]
2025-08-29 17:17:22.316952 | orchestrator |     }
2025-08-29 17:17:22.316963 | orchestrator | }
2025-08-29 17:17:22.316974 | orchestrator |
2025-08-29 17:17:22.316985 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-08-29 17:17:22.316996 | orchestrator |
2025-08-29 17:17:22.317007 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 17:17:22.317020 | orchestrator | Friday 29 August 2025 17:17:16 +0000 (0:00:00.255) 0:00:51.885 *********
2025-08-29 17:17:22.317032 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-08-29 17:17:22.317045 | orchestrator |
2025-08-29 17:17:22.317071 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 17:17:22.317083 | orchestrator | Friday 29 August 2025 17:17:16 +0000 (0:00:00.237) 0:00:52.140 *********
2025-08-29 17:17:22.317096 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:17:22.317109 | orchestrator |
2025-08-29 17:17:22.317121 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:17:22.317133 | orchestrator | Friday 29 August 2025 17:17:17 +0000 (0:00:00.436) 0:00:52.378 *********
2025-08-29 17:17:22.317146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-08-29 17:17:22.317158 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-08-29 17:17:22.317170 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-08-29 17:17:22.317182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-08-29 17:17:22.317194 | orchestrator | included:
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-08-29 17:17:22.317206 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-08-29 17:17:22.317219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-08-29 17:17:22.317231 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-08-29 17:17:22.317242 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-08-29 17:17:22.317255 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-08-29 17:17:22.317266 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-08-29 17:17:22.317309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-08-29 17:17:22.317321 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-08-29 17:17:22.317334 | orchestrator | 2025-08-29 17:17:22.317345 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:22.317356 | orchestrator | Friday 29 August 2025 17:17:17 +0000 (0:00:00.436) 0:00:52.815 ********* 2025-08-29 17:17:22.317366 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:22.317382 | orchestrator | 2025-08-29 17:17:22.317393 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:22.317404 | orchestrator | Friday 29 August 2025 17:17:17 +0000 (0:00:00.232) 0:00:53.048 ********* 2025-08-29 17:17:22.317414 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:22.317425 | orchestrator | 2025-08-29 17:17:22.317436 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:22.317465 | orchestrator | 
Friday 29 August 2025 17:17:18 +0000 (0:00:00.217) 0:00:53.265 ********* 2025-08-29 17:17:22.317476 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:22.317487 | orchestrator | 2025-08-29 17:17:22.317498 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:22.317508 | orchestrator | Friday 29 August 2025 17:17:18 +0000 (0:00:00.206) 0:00:53.471 ********* 2025-08-29 17:17:22.317519 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:22.317530 | orchestrator | 2025-08-29 17:17:22.317540 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:22.317551 | orchestrator | Friday 29 August 2025 17:17:18 +0000 (0:00:00.207) 0:00:53.679 ********* 2025-08-29 17:17:22.317562 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:22.317573 | orchestrator | 2025-08-29 17:17:22.317584 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:22.317594 | orchestrator | Friday 29 August 2025 17:17:18 +0000 (0:00:00.214) 0:00:53.893 ********* 2025-08-29 17:17:22.317605 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:22.317616 | orchestrator | 2025-08-29 17:17:22.317626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:22.317637 | orchestrator | Friday 29 August 2025 17:17:19 +0000 (0:00:00.641) 0:00:54.535 ********* 2025-08-29 17:17:22.317648 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:22.317658 | orchestrator | 2025-08-29 17:17:22.317669 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:22.317680 | orchestrator | Friday 29 August 2025 17:17:19 +0000 (0:00:00.213) 0:00:54.748 ********* 2025-08-29 17:17:22.317690 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:22.317701 | orchestrator | 2025-08-29 17:17:22.317712 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:22.317723 | orchestrator | Friday 29 August 2025 17:17:19 +0000 (0:00:00.205) 0:00:54.954 ********* 2025-08-29 17:17:22.317734 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92) 2025-08-29 17:17:22.317746 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92) 2025-08-29 17:17:22.317757 | orchestrator | 2025-08-29 17:17:22.317768 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:22.317778 | orchestrator | Friday 29 August 2025 17:17:20 +0000 (0:00:00.440) 0:00:55.394 ********* 2025-08-29 17:17:22.317789 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4c7a21bb-a31e-489d-9d8e-9d9677eceddb) 2025-08-29 17:17:22.317800 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4c7a21bb-a31e-489d-9d8e-9d9677eceddb) 2025-08-29 17:17:22.317810 | orchestrator | 2025-08-29 17:17:22.317821 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:22.317832 | orchestrator | Friday 29 August 2025 17:17:20 +0000 (0:00:00.448) 0:00:55.843 ********* 2025-08-29 17:17:22.317855 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a2504ec8-b4e2-4484-b80f-e6a3be658c85) 2025-08-29 17:17:22.317867 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a2504ec8-b4e2-4484-b80f-e6a3be658c85) 2025-08-29 17:17:22.317877 | orchestrator | 2025-08-29 17:17:22.317889 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:22.317899 | orchestrator | Friday 29 August 2025 17:17:21 +0000 (0:00:00.442) 0:00:56.285 ********* 2025-08-29 17:17:22.317910 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_f77776fc-d93a-4a3c-99b5-8bf2997ddc70) 2025-08-29 17:17:22.317921 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f77776fc-d93a-4a3c-99b5-8bf2997ddc70) 2025-08-29 17:17:22.317932 | orchestrator | 2025-08-29 17:17:22.317943 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:17:22.317954 | orchestrator | Friday 29 August 2025 17:17:21 +0000 (0:00:00.449) 0:00:56.734 ********* 2025-08-29 17:17:22.317964 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 17:17:22.317975 | orchestrator | 2025-08-29 17:17:22.317985 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:22.317996 | orchestrator | Friday 29 August 2025 17:17:21 +0000 (0:00:00.340) 0:00:57.075 ********* 2025-08-29 17:17:22.318007 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-08-29 17:17:22.318075 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-08-29 17:17:22.318087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-08-29 17:17:22.318098 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-08-29 17:17:22.318109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-08-29 17:17:22.318119 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-08-29 17:17:22.318162 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-08-29 17:17:22.318174 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-08-29 17:17:22.318184 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-08-29 17:17:22.318195 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-08-29 17:17:22.318206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-08-29 17:17:22.318224 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-08-29 17:17:31.512370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-08-29 17:17:31.512487 | orchestrator | 2025-08-29 17:17:31.512504 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:31.512517 | orchestrator | Friday 29 August 2025 17:17:22 +0000 (0:00:00.418) 0:00:57.493 ********* 2025-08-29 17:17:31.512529 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.512541 | orchestrator | 2025-08-29 17:17:31.512552 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:31.512563 | orchestrator | Friday 29 August 2025 17:17:22 +0000 (0:00:00.202) 0:00:57.695 ********* 2025-08-29 17:17:31.512574 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.512584 | orchestrator | 2025-08-29 17:17:31.512595 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:31.512606 | orchestrator | Friday 29 August 2025 17:17:22 +0000 (0:00:00.214) 0:00:57.910 ********* 2025-08-29 17:17:31.512617 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.512628 | orchestrator | 2025-08-29 17:17:31.512639 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:31.512674 | orchestrator | Friday 29 August 2025 17:17:23 +0000 (0:00:00.640) 0:00:58.551 ********* 2025-08-29 17:17:31.512685 | orchestrator | 
skipping: [testbed-node-5] 2025-08-29 17:17:31.512696 | orchestrator | 2025-08-29 17:17:31.512707 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:31.512718 | orchestrator | Friday 29 August 2025 17:17:23 +0000 (0:00:00.219) 0:00:58.771 ********* 2025-08-29 17:17:31.512728 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.512739 | orchestrator | 2025-08-29 17:17:31.512750 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:31.512761 | orchestrator | Friday 29 August 2025 17:17:23 +0000 (0:00:00.204) 0:00:58.975 ********* 2025-08-29 17:17:31.512771 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.512782 | orchestrator | 2025-08-29 17:17:31.512793 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:31.512803 | orchestrator | Friday 29 August 2025 17:17:24 +0000 (0:00:00.239) 0:00:59.215 ********* 2025-08-29 17:17:31.512814 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.512824 | orchestrator | 2025-08-29 17:17:31.512835 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:31.512847 | orchestrator | Friday 29 August 2025 17:17:24 +0000 (0:00:00.209) 0:00:59.425 ********* 2025-08-29 17:17:31.512858 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.512870 | orchestrator | 2025-08-29 17:17:31.512882 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:31.512894 | orchestrator | Friday 29 August 2025 17:17:24 +0000 (0:00:00.216) 0:00:59.641 ********* 2025-08-29 17:17:31.512906 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-08-29 17:17:31.512919 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-08-29 17:17:31.512931 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-08-29 
17:17:31.512943 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-08-29 17:17:31.512955 | orchestrator | 2025-08-29 17:17:31.512967 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:31.512980 | orchestrator | Friday 29 August 2025 17:17:25 +0000 (0:00:00.715) 0:01:00.357 ********* 2025-08-29 17:17:31.512991 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.513003 | orchestrator | 2025-08-29 17:17:31.513015 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:31.513026 | orchestrator | Friday 29 August 2025 17:17:25 +0000 (0:00:00.213) 0:01:00.570 ********* 2025-08-29 17:17:31.513038 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.513050 | orchestrator | 2025-08-29 17:17:31.513062 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:31.513075 | orchestrator | Friday 29 August 2025 17:17:25 +0000 (0:00:00.198) 0:01:00.769 ********* 2025-08-29 17:17:31.513087 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.513099 | orchestrator | 2025-08-29 17:17:31.513111 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:17:31.513123 | orchestrator | Friday 29 August 2025 17:17:25 +0000 (0:00:00.202) 0:01:00.971 ********* 2025-08-29 17:17:31.513139 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.513159 | orchestrator | 2025-08-29 17:17:31.513179 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 17:17:31.513197 | orchestrator | Friday 29 August 2025 17:17:25 +0000 (0:00:00.197) 0:01:01.169 ********* 2025-08-29 17:17:31.513216 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.513235 | orchestrator | 2025-08-29 17:17:31.513253 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-08-29 17:17:31.513273 | orchestrator | Friday 29 August 2025 17:17:26 +0000 (0:00:00.355) 0:01:01.524 ********* 2025-08-29 17:17:31.513317 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a4c19265-6381-5c6d-bd77-cfabc91aafa2'}}) 2025-08-29 17:17:31.513336 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'}}) 2025-08-29 17:17:31.513367 | orchestrator | 2025-08-29 17:17:31.513379 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 17:17:31.513390 | orchestrator | Friday 29 August 2025 17:17:26 +0000 (0:00:00.203) 0:01:01.727 ********* 2025-08-29 17:17:31.513402 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'}) 2025-08-29 17:17:31.513415 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'}) 2025-08-29 17:17:31.513425 | orchestrator | 2025-08-29 17:17:31.513436 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-08-29 17:17:31.513466 | orchestrator | Friday 29 August 2025 17:17:28 +0000 (0:00:01.850) 0:01:03.578 ********* 2025-08-29 17:17:31.513477 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})  2025-08-29 17:17:31.513489 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})  2025-08-29 17:17:31.513500 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.513511 | orchestrator | 2025-08-29 17:17:31.513522 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-08-29 17:17:31.513532 | orchestrator | Friday 29 August 2025 17:17:28 +0000 (0:00:00.224) 0:01:03.802 ********* 2025-08-29 17:17:31.513543 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'}) 2025-08-29 17:17:31.513572 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'}) 2025-08-29 17:17:31.513584 | orchestrator | 2025-08-29 17:17:31.513596 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-08-29 17:17:31.513606 | orchestrator | Friday 29 August 2025 17:17:29 +0000 (0:00:01.304) 0:01:05.107 ********* 2025-08-29 17:17:31.513617 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})  2025-08-29 17:17:31.513628 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})  2025-08-29 17:17:31.513639 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.513649 | orchestrator | 2025-08-29 17:17:31.513660 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-08-29 17:17:31.513671 | orchestrator | Friday 29 August 2025 17:17:30 +0000 (0:00:00.145) 0:01:05.252 ********* 2025-08-29 17:17:31.513682 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.513692 | orchestrator | 2025-08-29 17:17:31.513703 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-08-29 17:17:31.513714 | orchestrator | Friday 29 August 2025 17:17:30 +0000 (0:00:00.142) 0:01:05.394 ********* 2025-08-29 17:17:31.513725 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})  2025-08-29 17:17:31.513741 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})  2025-08-29 17:17:31.513752 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.513763 | orchestrator | 2025-08-29 17:17:31.513774 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-08-29 17:17:31.513785 | orchestrator | Friday 29 August 2025 17:17:30 +0000 (0:00:00.150) 0:01:05.545 ********* 2025-08-29 17:17:31.513795 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.513813 | orchestrator | 2025-08-29 17:17:31.513824 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-08-29 17:17:31.513835 | orchestrator | Friday 29 August 2025 17:17:30 +0000 (0:00:00.152) 0:01:05.697 ********* 2025-08-29 17:17:31.513845 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})  2025-08-29 17:17:31.513856 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})  2025-08-29 17:17:31.513867 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.513878 | orchestrator | 2025-08-29 17:17:31.513889 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-08-29 17:17:31.513900 | orchestrator | Friday 29 August 2025 17:17:30 +0000 (0:00:00.184) 0:01:05.882 ********* 2025-08-29 17:17:31.513910 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.513921 | orchestrator | 2025-08-29 17:17:31.513932 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-08-29 17:17:31.513943 | orchestrator | Friday 29 August 2025 17:17:30 +0000 (0:00:00.143) 0:01:06.025 ********* 2025-08-29 17:17:31.513954 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})  2025-08-29 17:17:31.513965 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})  2025-08-29 17:17:31.513976 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:31.513986 | orchestrator | 2025-08-29 17:17:31.513997 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-08-29 17:17:31.514008 | orchestrator | Friday 29 August 2025 17:17:30 +0000 (0:00:00.160) 0:01:06.185 ********* 2025-08-29 17:17:31.514082 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:17:31.514094 | orchestrator | 2025-08-29 17:17:31.514105 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-08-29 17:17:31.514116 | orchestrator | Friday 29 August 2025 17:17:31 +0000 (0:00:00.367) 0:01:06.553 ********* 2025-08-29 17:17:31.514134 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})  2025-08-29 17:17:37.955763 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})  2025-08-29 17:17:37.955872 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:37.955888 | orchestrator | 2025-08-29 17:17:37.955902 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-08-29 17:17:37.955915 | orchestrator | Friday 29 August 2025 
17:17:31 +0000 (0:00:00.144) 0:01:06.697 *********
2025-08-29 17:17:37.955926 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})
2025-08-29 17:17:37.955938 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})
2025-08-29 17:17:37.955949 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:17:37.955961 | orchestrator |
2025-08-29 17:17:37.955973 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-08-29 17:17:37.955984 | orchestrator | Friday 29 August 2025 17:17:31 +0000 (0:00:00.161) 0:01:06.859 *********
2025-08-29 17:17:37.955995 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})
2025-08-29 17:17:37.956006 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})
2025-08-29 17:17:37.956017 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:17:37.956054 | orchestrator |
2025-08-29 17:17:37.956065 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-08-29 17:17:37.956076 | orchestrator | Friday 29 August 2025 17:17:31 +0000 (0:00:00.165) 0:01:07.024 *********
2025-08-29 17:17:37.956087 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:17:37.956098 | orchestrator |
2025-08-29 17:17:37.956109 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-08-29 17:17:37.956119 | orchestrator | Friday 29 August 2025 17:17:31 +0000 (0:00:00.149) 0:01:07.173 *********
2025-08-29 17:17:37.956130 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:17:37.956141 | orchestrator |
2025-08-29 17:17:37.956151 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-08-29 17:17:37.956162 | orchestrator | Friday 29 August 2025 17:17:32 +0000 (0:00:00.151) 0:01:07.325 *********
2025-08-29 17:17:37.956173 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:17:37.956183 | orchestrator |
2025-08-29 17:17:37.956194 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-08-29 17:17:37.956221 | orchestrator | Friday 29 August 2025 17:17:32 +0000 (0:00:00.145) 0:01:07.470 *********
2025-08-29 17:17:37.956232 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 17:17:37.956244 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-08-29 17:17:37.956255 | orchestrator | }
2025-08-29 17:17:37.956266 | orchestrator |
2025-08-29 17:17:37.956277 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-08-29 17:17:37.956317 | orchestrator | Friday 29 August 2025 17:17:32 +0000 (0:00:00.145) 0:01:07.616 *********
2025-08-29 17:17:37.956330 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 17:17:37.956343 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-08-29 17:17:37.956355 | orchestrator | }
2025-08-29 17:17:37.956367 | orchestrator |
2025-08-29 17:17:37.956378 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-08-29 17:17:37.956390 | orchestrator | Friday 29 August 2025 17:17:32 +0000 (0:00:00.170) 0:01:07.786 *********
2025-08-29 17:17:37.956402 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 17:17:37.956414 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-08-29 17:17:37.956426 | orchestrator | }
2025-08-29 17:17:37.956438 | orchestrator |
2025-08-29 17:17:37.956450 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-08-29 17:17:37.956462 | orchestrator | Friday 29 August 2025 17:17:32 +0000 (0:00:00.151) 0:01:07.937 *********
2025-08-29 17:17:37.956474 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:17:37.956501 | orchestrator |
2025-08-29 17:17:37.956513 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-08-29 17:17:37.956537 | orchestrator | Friday 29 August 2025 17:17:33 +0000 (0:00:00.514) 0:01:08.452 *********
2025-08-29 17:17:37.956549 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:17:37.956561 | orchestrator |
2025-08-29 17:17:37.956573 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-08-29 17:17:37.956585 | orchestrator | Friday 29 August 2025 17:17:33 +0000 (0:00:00.551) 0:01:09.003 *********
2025-08-29 17:17:37.956597 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:17:37.956609 | orchestrator |
2025-08-29 17:17:37.956621 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-08-29 17:17:37.956633 | orchestrator | Friday 29 August 2025 17:17:34 +0000 (0:00:00.744) 0:01:09.747 *********
2025-08-29 17:17:37.956645 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:17:37.956656 | orchestrator |
2025-08-29 17:17:37.956667 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-08-29 17:17:37.956678 | orchestrator | Friday 29 August 2025 17:17:34 +0000 (0:00:00.158) 0:01:09.906 *********
2025-08-29 17:17:37.956689 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:17:37.956700 | orchestrator |
2025-08-29 17:17:37.956710 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-08-29 17:17:37.956721 | orchestrator | Friday 29 August 2025 17:17:34 +0000 (0:00:00.120) 0:01:10.027 *********
2025-08-29 17:17:37.956741 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:17:37.956752 | orchestrator |
2025-08-29 17:17:37.956763 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-08-29 17:17:37.956773 | orchestrator | Friday 29 August 2025 17:17:34 +0000 (0:00:00.128) 0:01:10.155 *********
2025-08-29 17:17:37.956784 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 17:17:37.956795 | orchestrator |     "vgs_report": {
2025-08-29 17:17:37.956807 | orchestrator |         "vg": []
2025-08-29 17:17:37.956835 | orchestrator |     }
2025-08-29 17:17:37.956847 | orchestrator | }
2025-08-29 17:17:37.956858 | orchestrator |
2025-08-29 17:17:37.956869 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-08-29 17:17:37.956880 | orchestrator | Friday 29 August 2025 17:17:35 +0000 (0:00:00.144) 0:01:10.300 *********
2025-08-29 17:17:37.956890 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:17:37.956901 | orchestrator |
2025-08-29 17:17:37.956912 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-08-29 17:17:37.956923 | orchestrator | Friday 29 August 2025 17:17:35 +0000 (0:00:00.151) 0:01:10.451 *********
2025-08-29 17:17:37.956934 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:17:37.956944 | orchestrator |
2025-08-29 17:17:37.956955 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-08-29 17:17:37.956966 | orchestrator | Friday 29 August 2025 17:17:35 +0000 (0:00:00.139) 0:01:10.591 *********
2025-08-29 17:17:37.956976 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:17:37.956987 | orchestrator |
2025-08-29 17:17:37.956998 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-08-29 17:17:37.957009 | orchestrator | Friday 29 August 2025 17:17:35 +0000 (0:00:00.143) 0:01:10.735 *********
2025-08-29 17:17:37.957020 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:17:37.957030 | orchestrator |
2025-08-29 17:17:37.957041 |
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-08-29 17:17:37.957052 | orchestrator | Friday 29 August 2025 17:17:35 +0000 (0:00:00.152) 0:01:10.887 ********* 2025-08-29 17:17:37.957063 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:37.957073 | orchestrator | 2025-08-29 17:17:37.957084 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-08-29 17:17:37.957095 | orchestrator | Friday 29 August 2025 17:17:35 +0000 (0:00:00.147) 0:01:11.034 ********* 2025-08-29 17:17:37.957106 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:37.957116 | orchestrator | 2025-08-29 17:17:37.957127 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-08-29 17:17:37.957138 | orchestrator | Friday 29 August 2025 17:17:36 +0000 (0:00:00.180) 0:01:11.215 ********* 2025-08-29 17:17:37.957149 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:37.957160 | orchestrator | 2025-08-29 17:17:37.957171 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-08-29 17:17:37.957181 | orchestrator | Friday 29 August 2025 17:17:36 +0000 (0:00:00.157) 0:01:11.373 ********* 2025-08-29 17:17:37.957192 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:37.957203 | orchestrator | 2025-08-29 17:17:37.957214 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-08-29 17:17:37.957225 | orchestrator | Friday 29 August 2025 17:17:36 +0000 (0:00:00.144) 0:01:11.518 ********* 2025-08-29 17:17:37.957235 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:37.957246 | orchestrator | 2025-08-29 17:17:37.957257 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-08-29 17:17:37.957273 | orchestrator | Friday 29 August 2025 17:17:36 +0000 (0:00:00.362) 0:01:11.880 ********* 
2025-08-29 17:17:37.957285 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:37.957312 | orchestrator | 2025-08-29 17:17:37.957323 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-08-29 17:17:37.957334 | orchestrator | Friday 29 August 2025 17:17:36 +0000 (0:00:00.150) 0:01:12.031 ********* 2025-08-29 17:17:37.957345 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:37.957362 | orchestrator | 2025-08-29 17:17:37.957373 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-08-29 17:17:37.957384 | orchestrator | Friday 29 August 2025 17:17:36 +0000 (0:00:00.142) 0:01:12.174 ********* 2025-08-29 17:17:37.957394 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:37.957405 | orchestrator | 2025-08-29 17:17:37.957416 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-08-29 17:17:37.957427 | orchestrator | Friday 29 August 2025 17:17:37 +0000 (0:00:00.175) 0:01:12.349 ********* 2025-08-29 17:17:37.957438 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:37.957448 | orchestrator | 2025-08-29 17:17:37.957459 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-08-29 17:17:37.957470 | orchestrator | Friday 29 August 2025 17:17:37 +0000 (0:00:00.141) 0:01:12.490 ********* 2025-08-29 17:17:37.957480 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:37.957491 | orchestrator | 2025-08-29 17:17:37.957502 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-08-29 17:17:37.957513 | orchestrator | Friday 29 August 2025 17:17:37 +0000 (0:00:00.149) 0:01:12.639 ********* 2025-08-29 17:17:37.957524 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})  2025-08-29 
17:17:37.957535 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})  2025-08-29 17:17:37.957546 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:37.957557 | orchestrator | 2025-08-29 17:17:37.957568 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-08-29 17:17:37.957579 | orchestrator | Friday 29 August 2025 17:17:37 +0000 (0:00:00.164) 0:01:12.804 ********* 2025-08-29 17:17:37.957590 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})  2025-08-29 17:17:37.957601 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})  2025-08-29 17:17:37.957612 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:37.957623 | orchestrator | 2025-08-29 17:17:37.957633 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-08-29 17:17:37.957644 | orchestrator | Friday 29 August 2025 17:17:37 +0000 (0:00:00.167) 0:01:12.971 ********* 2025-08-29 17:17:37.957663 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})  2025-08-29 17:17:41.072535 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})  2025-08-29 17:17:41.072628 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:41.072641 | orchestrator | 2025-08-29 17:17:41.072655 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-08-29 17:17:41.072667 | orchestrator | Friday 29 August 2025 
17:17:37 +0000 (0:00:00.170) 0:01:13.142 ********* 2025-08-29 17:17:41.072679 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})  2025-08-29 17:17:41.072690 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})  2025-08-29 17:17:41.072701 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:41.072712 | orchestrator | 2025-08-29 17:17:41.072723 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-08-29 17:17:41.072734 | orchestrator | Friday 29 August 2025 17:17:38 +0000 (0:00:00.150) 0:01:13.292 ********* 2025-08-29 17:17:41.072751 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})  2025-08-29 17:17:41.072811 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})  2025-08-29 17:17:41.072833 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:41.072851 | orchestrator | 2025-08-29 17:17:41.072868 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-08-29 17:17:41.072885 | orchestrator | Friday 29 August 2025 17:17:38 +0000 (0:00:00.167) 0:01:13.460 ********* 2025-08-29 17:17:41.072903 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})  2025-08-29 17:17:41.072922 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})  2025-08-29 17:17:41.072941 | orchestrator | 
skipping: [testbed-node-5] 2025-08-29 17:17:41.072959 | orchestrator | 2025-08-29 17:17:41.072978 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-08-29 17:17:41.072989 | orchestrator | Friday 29 August 2025 17:17:38 +0000 (0:00:00.150) 0:01:13.611 ********* 2025-08-29 17:17:41.073000 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})  2025-08-29 17:17:41.073011 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})  2025-08-29 17:17:41.073022 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:41.073033 | orchestrator | 2025-08-29 17:17:41.073043 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-08-29 17:17:41.073055 | orchestrator | Friday 29 August 2025 17:17:38 +0000 (0:00:00.380) 0:01:13.992 ********* 2025-08-29 17:17:41.073066 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})  2025-08-29 17:17:41.073078 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})  2025-08-29 17:17:41.073091 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:41.073103 | orchestrator | 2025-08-29 17:17:41.073116 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-08-29 17:17:41.073129 | orchestrator | Friday 29 August 2025 17:17:38 +0000 (0:00:00.157) 0:01:14.149 ********* 2025-08-29 17:17:41.073142 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:17:41.073155 | orchestrator | 2025-08-29 17:17:41.073167 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-08-29 17:17:41.073180 | orchestrator | Friday 29 August 2025 17:17:39 +0000 (0:00:00.519) 0:01:14.668 ********* 2025-08-29 17:17:41.073192 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:17:41.073204 | orchestrator | 2025-08-29 17:17:41.073216 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-08-29 17:17:41.073229 | orchestrator | Friday 29 August 2025 17:17:39 +0000 (0:00:00.514) 0:01:15.183 ********* 2025-08-29 17:17:41.073242 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:17:41.073254 | orchestrator | 2025-08-29 17:17:41.073266 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-08-29 17:17:41.073279 | orchestrator | Friday 29 August 2025 17:17:40 +0000 (0:00:00.155) 0:01:15.338 ********* 2025-08-29 17:17:41.073315 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'vg_name': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'}) 2025-08-29 17:17:41.073330 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'vg_name': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'}) 2025-08-29 17:17:41.073342 | orchestrator | 2025-08-29 17:17:41.073353 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-08-29 17:17:41.073375 | orchestrator | Friday 29 August 2025 17:17:40 +0000 (0:00:00.197) 0:01:15.536 ********* 2025-08-29 17:17:41.073404 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})  2025-08-29 17:17:41.073416 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})  2025-08-29 17:17:41.073427 | orchestrator | skipping: 
[testbed-node-5] 2025-08-29 17:17:41.073437 | orchestrator | 2025-08-29 17:17:41.073448 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-08-29 17:17:41.073459 | orchestrator | Friday 29 August 2025 17:17:40 +0000 (0:00:00.178) 0:01:15.714 ********* 2025-08-29 17:17:41.073470 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})  2025-08-29 17:17:41.073481 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})  2025-08-29 17:17:41.073493 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:41.073503 | orchestrator | 2025-08-29 17:17:41.073514 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-08-29 17:17:41.073525 | orchestrator | Friday 29 August 2025 17:17:40 +0000 (0:00:00.180) 0:01:15.895 ********* 2025-08-29 17:17:41.073536 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'})  2025-08-29 17:17:41.073564 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'})  2025-08-29 17:17:41.073576 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:17:41.073587 | orchestrator | 2025-08-29 17:17:41.073598 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-08-29 17:17:41.073608 | orchestrator | Friday 29 August 2025 17:17:40 +0000 (0:00:00.179) 0:01:16.074 ********* 2025-08-29 17:17:41.073619 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 17:17:41.073630 | orchestrator |  "lvm_report": { 2025-08-29 17:17:41.073642 | orchestrator |  "lv": [ 2025-08-29 
17:17:41.073652 | orchestrator |  { 2025-08-29 17:17:41.073663 | orchestrator |  "lv_name": "osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2", 2025-08-29 17:17:41.073680 | orchestrator |  "vg_name": "ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2" 2025-08-29 17:17:41.073691 | orchestrator |  }, 2025-08-29 17:17:41.073702 | orchestrator |  { 2025-08-29 17:17:41.073713 | orchestrator |  "lv_name": "osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591", 2025-08-29 17:17:41.073723 | orchestrator |  "vg_name": "ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591" 2025-08-29 17:17:41.073734 | orchestrator |  } 2025-08-29 17:17:41.073745 | orchestrator |  ], 2025-08-29 17:17:41.073756 | orchestrator |  "pv": [ 2025-08-29 17:17:41.073766 | orchestrator |  { 2025-08-29 17:17:41.073777 | orchestrator |  "pv_name": "/dev/sdb", 2025-08-29 17:17:41.073788 | orchestrator |  "vg_name": "ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2" 2025-08-29 17:17:41.073798 | orchestrator |  }, 2025-08-29 17:17:41.073809 | orchestrator |  { 2025-08-29 17:17:41.073820 | orchestrator |  "pv_name": "/dev/sdc", 2025-08-29 17:17:41.073830 | orchestrator |  "vg_name": "ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591" 2025-08-29 17:17:41.073841 | orchestrator |  } 2025-08-29 17:17:41.073852 | orchestrator |  ] 2025-08-29 17:17:41.073862 | orchestrator |  } 2025-08-29 17:17:41.073873 | orchestrator | } 2025-08-29 17:17:41.073885 | orchestrator | 2025-08-29 17:17:41.073896 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:17:41.073913 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-08-29 17:17:41.073924 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-08-29 17:17:41.073936 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-08-29 17:17:41.073947 | orchestrator | 2025-08-29 17:17:41.073957 | 
orchestrator | 2025-08-29 17:17:41.073968 | orchestrator | 2025-08-29 17:17:41.073979 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:17:41.073990 | orchestrator | Friday 29 August 2025 17:17:41 +0000 (0:00:00.159) 0:01:16.234 ********* 2025-08-29 17:17:41.074001 | orchestrator | =============================================================================== 2025-08-29 17:17:41.074011 | orchestrator | Create block VGs -------------------------------------------------------- 5.66s 2025-08-29 17:17:41.074084 | orchestrator | Create block LVs -------------------------------------------------------- 4.02s 2025-08-29 17:17:41.074095 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.90s 2025-08-29 17:17:41.074106 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.75s 2025-08-29 17:17:41.074117 | orchestrator | Add known partitions to the list of available block devices ------------- 1.66s 2025-08-29 17:17:41.074127 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.60s 2025-08-29 17:17:41.074138 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.54s 2025-08-29 17:17:41.074149 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.53s 2025-08-29 17:17:41.074167 | orchestrator | Add known links to the list of available block devices ------------------ 1.40s 2025-08-29 17:17:41.480289 | orchestrator | Add known partitions to the list of available block devices ------------- 1.26s 2025-08-29 17:17:41.480416 | orchestrator | Print LVM report data --------------------------------------------------- 0.98s 2025-08-29 17:17:41.480429 | orchestrator | Add known links to the list of available block devices ------------------ 0.92s 2025-08-29 17:17:41.480441 | orchestrator | Add known partitions to the list of 
available block devices ------------- 0.91s 2025-08-29 17:17:41.480452 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.82s 2025-08-29 17:17:41.480462 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.81s 2025-08-29 17:17:41.480473 | orchestrator | Get initial list of available block devices ----------------------------- 0.78s 2025-08-29 17:17:41.480484 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.75s 2025-08-29 17:17:41.480494 | orchestrator | Print size needed for LVs on ceph_wal_devices --------------------------- 0.72s 2025-08-29 17:17:41.480505 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.72s 2025-08-29 17:17:41.480515 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.72s 2025-08-29 17:17:53.868796 | orchestrator | 2025-08-29 17:17:53 | INFO  | Task 6eca3a88-74b2-48d8-a1f6-13a7d7cb3b03 (facts) was prepared for execution. 2025-08-29 17:17:53.868903 | orchestrator | 2025-08-29 17:17:53 | INFO  | It takes a moment until task 6eca3a88-74b2-48d8-a1f6-13a7d7cb3b03 (facts) has been started and output is visible here. 
2025-08-29 17:18:08.232934 | orchestrator |
2025-08-29 17:18:08.233048 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-08-29 17:18:08.233067 | orchestrator |
2025-08-29 17:18:08.233079 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-08-29 17:18:08.233091 | orchestrator | Friday 29 August 2025 17:17:58 +0000 (0:00:00.274) 0:00:00.274 *********
2025-08-29 17:18:08.233103 | orchestrator | ok: [testbed-manager]
2025-08-29 17:18:08.233115 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:18:08.233154 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:18:08.233165 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:18:08.233176 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:18:08.233187 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:18:08.233197 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:18:08.233208 | orchestrator |
2025-08-29 17:18:08.233219 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-08-29 17:18:08.233230 | orchestrator | Friday 29 August 2025 17:17:59 +0000 (0:00:01.087) 0:00:01.361 *********
2025-08-29 17:18:08.233256 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:18:08.233268 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:18:08.233280 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:18:08.233291 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:18:08.233365 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:18:08.233379 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:18:08.233390 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:18:08.233406 | orchestrator |
2025-08-29 17:18:08.233423 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-08-29 17:18:08.233441 | orchestrator |
2025-08-29 17:18:08.233459 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 17:18:08.233474 | orchestrator | Friday 29 August 2025 17:18:00 +0000 (0:00:01.365) 0:00:02.727 *********
2025-08-29 17:18:08.233487 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:18:08.233499 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:18:08.233511 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:18:08.233524 | orchestrator | ok: [testbed-manager]
2025-08-29 17:18:08.233536 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:18:08.233549 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:18:08.233561 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:18:08.233573 | orchestrator |
2025-08-29 17:18:08.233587 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-08-29 17:18:08.233599 | orchestrator |
2025-08-29 17:18:08.233611 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-08-29 17:18:08.233624 | orchestrator | Friday 29 August 2025 17:18:07 +0000 (0:00:06.667) 0:00:09.394 *********
2025-08-29 17:18:08.233637 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:18:08.233649 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:18:08.233662 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:18:08.233674 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:18:08.233686 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:18:08.233698 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:18:08.233710 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:18:08.233722 | orchestrator |
2025-08-29 17:18:08.233734 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:18:08.233748 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:18:08.233762 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:18:08.233775 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:18:08.233788 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:18:08.233801 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:18:08.233814 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:18:08.233827 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:18:08.233849 | orchestrator |
2025-08-29 17:18:08.233860 | orchestrator |
2025-08-29 17:18:08.233871 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:18:08.233882 | orchestrator | Friday 29 August 2025 17:18:07 +0000 (0:00:00.611) 0:00:10.006 *********
2025-08-29 17:18:08.233893 | orchestrator | ===============================================================================
2025-08-29 17:18:08.233904 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.67s
2025-08-29 17:18:08.233915 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.37s
2025-08-29 17:18:08.233926 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s
2025-08-29 17:18:08.233937 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s
2025-08-29 17:18:20.796970 | orchestrator | 2025-08-29 17:18:20 | INFO  | Task 2cf638e4-493c-4f67-8aa2-6d7926a5af07 (frr) was prepared for execution.
2025-08-29 17:18:20.797159 | orchestrator | 2025-08-29 17:18:20 | INFO  | It takes a moment until task 2cf638e4-493c-4f67-8aa2-6d7926a5af07 (frr) has been started and output is visible here.
2025-08-29 17:18:47.918090 | orchestrator |
2025-08-29 17:18:47.918214 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-08-29 17:18:47.918231 | orchestrator |
2025-08-29 17:18:47.918244 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-08-29 17:18:47.918257 | orchestrator | Friday 29 August 2025 17:18:25 +0000 (0:00:00.255) 0:00:00.255 *********
2025-08-29 17:18:47.918269 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-08-29 17:18:47.918282 | orchestrator |
2025-08-29 17:18:47.918293 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-08-29 17:18:47.918304 | orchestrator | Friday 29 August 2025 17:18:25 +0000 (0:00:00.227) 0:00:00.483 *********
2025-08-29 17:18:47.918315 | orchestrator | changed: [testbed-manager]
2025-08-29 17:18:47.918394 | orchestrator |
2025-08-29 17:18:47.918406 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-08-29 17:18:47.918417 | orchestrator | Friday 29 August 2025 17:18:26 +0000 (0:00:01.174) 0:00:01.657 *********
2025-08-29 17:18:47.918428 | orchestrator | changed: [testbed-manager]
2025-08-29 17:18:47.918439 | orchestrator |
2025-08-29 17:18:47.918468 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-08-29 17:18:47.918479 | orchestrator | Friday 29 August 2025 17:18:37 +0000 (0:00:10.378) 0:00:12.036 *********
2025-08-29 17:18:47.918490 | orchestrator | ok: [testbed-manager]
2025-08-29 17:18:47.918502 | orchestrator |
2025-08-29 17:18:47.918513 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-08-29 17:18:47.918524 | orchestrator | Friday 29 August 2025 17:18:38 +0000 (0:00:01.298) 0:00:13.334 *********
2025-08-29 17:18:47.918535 | orchestrator | changed: [testbed-manager]
2025-08-29 17:18:47.918546 | orchestrator |
2025-08-29 17:18:47.918557 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-08-29 17:18:47.918568 | orchestrator | Friday 29 August 2025 17:18:39 +0000 (0:00:00.979) 0:00:14.313 *********
2025-08-29 17:18:47.918580 | orchestrator | ok: [testbed-manager]
2025-08-29 17:18:47.918592 | orchestrator |
2025-08-29 17:18:47.918605 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-08-29 17:18:47.918618 | orchestrator | Friday 29 August 2025 17:18:40 +0000 (0:00:01.184) 0:00:15.498 *********
2025-08-29 17:18:47.918630 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 17:18:47.918642 | orchestrator |
2025-08-29 17:18:47.918654 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-08-29 17:18:47.918666 | orchestrator | Friday 29 August 2025 17:18:41 +0000 (0:00:00.800) 0:00:16.299 *********
2025-08-29 17:18:47.918678 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:18:47.918690 | orchestrator |
2025-08-29 17:18:47.918703 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-08-29 17:18:47.918738 | orchestrator | Friday 29 August 2025 17:18:41 +0000 (0:00:00.168) 0:00:16.467 *********
2025-08-29 17:18:47.918751 | orchestrator | changed: [testbed-manager]
2025-08-29 17:18:47.918763 | orchestrator |
2025-08-29 17:18:47.918775 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-08-29 17:18:47.918787 | orchestrator | Friday 29 August 2025 17:18:42 +0000 (0:00:01.025) 0:00:17.493 *********
2025-08-29 17:18:47.918799 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-08-29 17:18:47.918812 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-08-29 17:18:47.918825 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-08-29 17:18:47.918837 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-08-29 17:18:47.918850 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-08-29 17:18:47.918861 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-08-29 17:18:47.918873 | orchestrator |
2025-08-29 17:18:47.918885 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-08-29 17:18:47.918897 | orchestrator | Friday 29 August 2025 17:18:44 +0000 (0:00:02.255) 0:00:19.749 *********
2025-08-29 17:18:47.918909 | orchestrator | ok: [testbed-manager]
2025-08-29 17:18:47.918921 | orchestrator |
2025-08-29 17:18:47.918932 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-08-29 17:18:47.918943 | orchestrator | Friday 29 August 2025 17:18:46 +0000 (0:00:01.460) 0:00:21.209 *********
2025-08-29 17:18:47.918953 | orchestrator | changed: [testbed-manager]
2025-08-29 17:18:47.918964 | orchestrator |
2025-08-29 17:18:47.918975 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:18:47.918986 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 17:18:47.918997 | orchestrator |
2025-08-29 17:18:47.919008 | orchestrator |
2025-08-29 17:18:47.919019 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:18:47.919029 | orchestrator | Friday 29 August 2025 17:18:47 +0000 (0:00:01.444) 0:00:22.653 *********
2025-08-29 17:18:47.919040 | orchestrator | ===============================================================================
2025-08-29 17:18:47.919051 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.38s
2025-08-29 17:18:47.919062 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.26s
2025-08-29 17:18:47.919072 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.46s
2025-08-29 17:18:47.919083 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.44s
2025-08-29 17:18:47.919111 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.30s
2025-08-29 17:18:47.919123 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.18s
2025-08-29 17:18:47.919133 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.17s
2025-08-29 17:18:47.919144 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 1.03s
2025-08-29 17:18:47.919155 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.98s
2025-08-29 17:18:47.919166 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.80s
2025-08-29 17:18:47.919177 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s
2025-08-29 17:18:47.919187 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.17s
2025-08-29 17:18:48.231100 | orchestrator |
2025-08-29 17:18:48.235497 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Aug 29 17:18:48 UTC 2025
2025-08-29 17:18:48.235583 | orchestrator |
2025-08-29 17:18:50.206573 | orchestrator | 2025-08-29 17:18:50 | INFO  | Collection nutshell is prepared for execution
2025-08-29 17:18:50.206652 | orchestrator | 2025-08-29 17:18:50 | INFO  | D [0] - dotfiles
2025-08-29 17:19:00.348448 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [0] - homer
2025-08-29 17:19:00.348539 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [0] - netdata
2025-08-29 17:19:00.348549 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [0] - openstackclient
2025-08-29 17:19:00.348556 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [0] - phpmyadmin
2025-08-29 17:19:00.348563 | orchestrator | 2025-08-29 17:19:00 | INFO  | A [0] - common
2025-08-29 17:19:00.351631 | orchestrator | 2025-08-29 17:19:00 | INFO  | A [1] -- loadbalancer
2025-08-29 17:19:00.351930 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [2] --- opensearch
2025-08-29 17:19:00.352563 | orchestrator | 2025-08-29 17:19:00 | INFO  | A [2] --- mariadb-ng
2025-08-29 17:19:00.353522 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [3] ---- horizon
2025-08-29 17:19:00.353549 | orchestrator | 2025-08-29 17:19:00 | INFO  | A [3] ---- keystone
2025-08-29 17:19:00.353790 | orchestrator | 2025-08-29 17:19:00 | INFO  | A [4] ----- neutron
2025-08-29 17:19:00.354400 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [5] ------ wait-for-nova
2025-08-29 17:19:00.354424 | orchestrator | 2025-08-29 17:19:00 | INFO  | A [5] ------ octavia
2025-08-29 17:19:00.355969 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [4] ----- barbican
2025-08-29 17:19:00.356096 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [4] ----- designate
2025-08-29 17:19:00.356938 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [4] ----- ironic
2025-08-29 17:19:00.356960 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [4] ----- placement
2025-08-29 17:19:00.356973 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [4] ----- magnum
2025-08-29 17:19:00.357811 | orchestrator | 2025-08-29 17:19:00 | INFO  | A [1] -- openvswitch
2025-08-29 17:19:00.357832 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [2] --- ovn
2025-08-29 17:19:00.358526 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [1] --
memcached 2025-08-29 17:19:00.358549 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [1] -- redis 2025-08-29 17:19:00.358562 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [1] -- rabbitmq-ng 2025-08-29 17:19:00.358662 | orchestrator | 2025-08-29 17:19:00 | INFO  | A [0] - kubernetes 2025-08-29 17:19:00.362485 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [1] -- kubeconfig 2025-08-29 17:19:00.362507 | orchestrator | 2025-08-29 17:19:00 | INFO  | A [1] -- copy-kubeconfig 2025-08-29 17:19:00.362702 | orchestrator | 2025-08-29 17:19:00 | INFO  | A [0] - ceph 2025-08-29 17:19:00.365485 | orchestrator | 2025-08-29 17:19:00 | INFO  | A [1] -- ceph-pools 2025-08-29 17:19:00.365869 | orchestrator | 2025-08-29 17:19:00 | INFO  | A [2] --- copy-ceph-keys 2025-08-29 17:19:00.365890 | orchestrator | 2025-08-29 17:19:00 | INFO  | A [3] ---- cephclient 2025-08-29 17:19:00.366640 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-08-29 17:19:00.366664 | orchestrator | 2025-08-29 17:19:00 | INFO  | A [4] ----- wait-for-keystone 2025-08-29 17:19:00.366677 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [5] ------ kolla-ceph-rgw 2025-08-29 17:19:00.366864 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [5] ------ glance 2025-08-29 17:19:00.366885 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [5] ------ cinder 2025-08-29 17:19:00.366897 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [5] ------ nova 2025-08-29 17:19:00.367496 | orchestrator | 2025-08-29 17:19:00 | INFO  | A [4] ----- prometheus 2025-08-29 17:19:00.368813 | orchestrator | 2025-08-29 17:19:00 | INFO  | D [5] ------ grafana 2025-08-29 17:19:00.575097 | orchestrator | 2025-08-29 17:19:00 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-08-29 17:19:00.575209 | orchestrator | 2025-08-29 17:19:00 | INFO  | Tasks are running in the background 2025-08-29 17:19:03.933721 | orchestrator | 2025-08-29 17:19:03 | INFO  | No task IDs specified, wait for 
all currently running tasks
2025-08-29 17:19:06.066852 | orchestrator | 2025-08-29 17:19:06 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED
2025-08-29 17:19:06.068276 | orchestrator | 2025-08-29 17:19:06 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:19:06.069192 | orchestrator | 2025-08-29 17:19:06 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED
2025-08-29 17:19:06.071708 | orchestrator | 2025-08-29 17:19:06 | INFO  | Task 8873a640-f4f1-4580-89ae-521462ea5425 is in state STARTED
2025-08-29 17:19:06.072541 | orchestrator | 2025-08-29 17:19:06 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:19:06.073420 | orchestrator | 2025-08-29 17:19:06 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:19:06.075120 | orchestrator | 2025-08-29 17:19:06 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:19:06.075173 | orchestrator | 2025-08-29 17:19:06 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:19:09.125175 | orchestrator | 2025-08-29 17:19:09 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED
2025-08-29 17:19:09.126200 | orchestrator | 2025-08-29 17:19:09 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:19:09.129763 | orchestrator | 2025-08-29 17:19:09 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED
2025-08-29 17:19:09.130120 | orchestrator | 2025-08-29 17:19:09 | INFO  | Task 8873a640-f4f1-4580-89ae-521462ea5425 is in state STARTED
2025-08-29 17:19:09.135588 | orchestrator | 2025-08-29 17:19:09 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:19:09.137631 | orchestrator | 2025-08-29 17:19:09 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:19:09.138376 | orchestrator | 2025-08-29 17:19:09 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:19:09.138461 | orchestrator | 2025-08-29 17:19:09 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:19:12.191301 | orchestrator | 2025-08-29 17:19:12 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED
2025-08-29 17:19:12.191452 | orchestrator | 2025-08-29 17:19:12 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:19:12.193060 | orchestrator | 2025-08-29 17:19:12 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED
2025-08-29 17:19:12.194934 | orchestrator | 2025-08-29 17:19:12 | INFO  | Task 8873a640-f4f1-4580-89ae-521462ea5425 is in state STARTED
2025-08-29 17:19:12.195553 | orchestrator | 2025-08-29 17:19:12 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:19:12.197598 | orchestrator | 2025-08-29 17:19:12 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:19:12.198340 | orchestrator | 2025-08-29 17:19:12 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:19:12.198364 | orchestrator | 2025-08-29 17:19:12 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:19:15.239101 | orchestrator | 2025-08-29 17:19:15 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED
2025-08-29 17:19:15.242149 | orchestrator | 2025-08-29 17:19:15 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:19:15.244582 | orchestrator | 2025-08-29 17:19:15 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED
2025-08-29 17:19:15.244753 | orchestrator | 2025-08-29 17:19:15 | INFO  | Task 8873a640-f4f1-4580-89ae-521462ea5425 is in state STARTED
2025-08-29 17:19:15.247211 | orchestrator | 2025-08-29 17:19:15 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:19:15.247842 | orchestrator | 2025-08-29 17:19:15 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:19:15.249885 | orchestrator | 2025-08-29 17:19:15 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:19:15.249908 | orchestrator | 2025-08-29 17:19:15 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:19:18.539105 | orchestrator | 2025-08-29 17:19:18 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED
2025-08-29 17:19:18.539214 | orchestrator | 2025-08-29 17:19:18 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:19:18.539230 | orchestrator | 2025-08-29 17:19:18 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED
2025-08-29 17:19:18.539243 | orchestrator | 2025-08-29 17:19:18 | INFO  | Task 8873a640-f4f1-4580-89ae-521462ea5425 is in state STARTED
2025-08-29 17:19:18.539254 | orchestrator | 2025-08-29 17:19:18 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:19:18.539265 | orchestrator | 2025-08-29 17:19:18 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:19:18.541165 | orchestrator | 2025-08-29 17:19:18 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:19:18.541203 | orchestrator | 2025-08-29 17:19:18 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:19:21.874354 | orchestrator | 2025-08-29 17:19:21 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED
2025-08-29 17:19:21.874472 | orchestrator | 2025-08-29 17:19:21 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:19:21.874488 | orchestrator | 2025-08-29 17:19:21 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED
2025-08-29 17:19:21.874499 | orchestrator | 2025-08-29 17:19:21 | INFO  | Task 8873a640-f4f1-4580-89ae-521462ea5425 is in state STARTED
2025-08-29 17:19:21.874510 | orchestrator | 2025-08-29 17:19:21 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:19:21.874521 | orchestrator | 2025-08-29 17:19:21 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:19:21.874532 | orchestrator | 2025-08-29 17:19:21 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:19:21.874543 | orchestrator | 2025-08-29 17:19:21 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:19:24.909043 | orchestrator | 2025-08-29 17:19:24 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED
2025-08-29 17:19:24.909156 | orchestrator | 2025-08-29 17:19:24 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:19:24.909597 | orchestrator | 2025-08-29 17:19:24 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED
2025-08-29 17:19:24.910436 | orchestrator | 2025-08-29 17:19:24 | INFO  | Task 8873a640-f4f1-4580-89ae-521462ea5425 is in state STARTED
2025-08-29 17:19:24.913487 | orchestrator | 2025-08-29 17:19:24 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:19:24.913864 | orchestrator | 2025-08-29 17:19:24 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:19:24.914719 | orchestrator | 2025-08-29 17:19:24 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:19:24.914752 | orchestrator | 2025-08-29 17:19:24 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:19:27.976776 | orchestrator | 2025-08-29 17:19:27 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED
2025-08-29 17:19:27.976877 | orchestrator | 2025-08-29 17:19:27 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:19:27.978800 | orchestrator | 2025-08-29 17:19:27 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED
2025-08-29 17:19:27.979119 | orchestrator | 2025-08-29 17:19:27 | INFO  | Task 8873a640-f4f1-4580-89ae-521462ea5425 is in state STARTED
2025-08-29 17:19:27.982110 | orchestrator | 2025-08-29 17:19:27 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:19:27.982714 | orchestrator | 2025-08-29 17:19:27 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:19:27.983198 | orchestrator | 2025-08-29 17:19:27 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:19:27.983383 | orchestrator | 2025-08-29 17:19:27 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:19:31.063774 | orchestrator | 2025-08-29 17:19:31 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED
2025-08-29 17:19:31.065842 | orchestrator | 2025-08-29 17:19:31 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:19:31.069473 | orchestrator | 2025-08-29 17:19:31 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED
2025-08-29 17:19:31.073237 | orchestrator | 2025-08-29 17:19:31 | INFO  | Task 8873a640-f4f1-4580-89ae-521462ea5425 is in state STARTED
2025-08-29 17:19:31.075109 | orchestrator | 2025-08-29 17:19:31 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:19:31.077021 | orchestrator | 2025-08-29 17:19:31 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:19:31.079043 | orchestrator | 2025-08-29 17:19:31 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:19:31.079662 | orchestrator | 2025-08-29 17:19:31 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:19:34.121615 | orchestrator | 2025-08-29 17:19:34 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED
2025-08-29 17:19:34.123382 | orchestrator | 2025-08-29 17:19:34 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:19:34.124235 | orchestrator | 2025-08-29 17:19:34 | INFO  | Task
9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED
2025-08-29 17:19:34.125667 | orchestrator | 2025-08-29 17:19:34 | INFO  | Task 8873a640-f4f1-4580-89ae-521462ea5425 is in state SUCCESS
2025-08-29 17:19:34.126102 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-08-29 17:19:34.126141 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-08-29 17:19:34.126160 | orchestrator | Friday 29 August 2025 17:19:17 +0000 (0:00:01.434) 0:00:01.434 *********
2025-08-29 17:19:34.126179 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:19:34.126193 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:19:34.126228 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:19:34.126240 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:19:34.126251 | orchestrator | changed: [testbed-manager]
2025-08-29 17:19:34.126261 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:19:34.126272 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:19:34.126293 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-08-29 17:19:34.126304 | orchestrator | Friday 29 August 2025 17:19:21 +0000 (0:00:04.616) 0:00:06.050 *********
2025-08-29 17:19:34.126445 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-08-29 17:19:34.126458 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-08-29 17:19:34.126468 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-08-29 17:19:34.126479 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-08-29 17:19:34.126490 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-08-29 17:19:34.126500 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-08-29 17:19:34.126527 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-08-29 17:19:34.126560 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-08-29 17:19:34.126571 | orchestrator | Friday 29 August 2025 17:19:24 +0000 (0:00:02.823) 0:00:08.873 *********
2025-08-29 17:19:34.126585 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 17:19:22.793956', 'end': '2025-08-29 17:19:22.803835', 'delta': '0:00:00.009879', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 17:19:34.126607 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 17:19:22.869867', 'end': '2025-08-29 17:19:22.879087', 'delta': '0:00:00.009220', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 17:19:34.126619 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 17:19:22.832002', 'end': '2025-08-29 17:19:22.842128', 'delta': '0:00:00.010126', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 17:19:34.126652 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 17:19:22.686489', 'end': '2025-08-29 17:19:22.695458', 'delta': '0:00:00.008969', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 17:19:34.126680 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 17:19:22.825331', 'end': '2025-08-29 17:19:22.833776', 'delta': '0:00:00.008445', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 17:19:34.126692 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 17:19:23.327426', 'end': '2025-08-29 17:19:23.337411', 'delta': '0:00:00.009985', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 17:19:34.126703 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 17:19:24.333600', 'end': '2025-08-29 17:19:24.384151', 'delta': '0:00:00.050551', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 17:19:34.126726 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-08-29 17:19:34.126737 | orchestrator | Friday 29 August 2025 17:19:28 +0000 (0:00:03.450) 0:00:12.324 *********
2025-08-29 17:19:34.126748 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-08-29 17:19:34.126759 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-08-29 17:19:34.126770 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-08-29 17:19:34.126780 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-08-29 17:19:34.126791 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-08-29 17:19:34.126802 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-08-29 17:19:34.126813 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-08-29 17:19:34.126840 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-08-29 17:19:34.126851 | orchestrator | Friday 29 August 2025 17:19:30 +0000 (0:00:02.092) 0:00:14.417 *********
2025-08-29 17:19:34.126862 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-08-29 17:19:34.126873 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-08-29 17:19:34.126884 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-08-29 17:19:34.126895 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-08-29 17:19:34.126905 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-08-29 17:19:34.126916 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-08-29 17:19:34.126927 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-08-29 17:19:34.126949 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:19:34.126971 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:19:34.126984 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:19:34.126995 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:19:34.127006 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:19:34.127018 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:19:34.127028 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:19:34.127039 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:19:34.127075 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:19:34.127087 | orchestrator | Friday 29 August 2025 17:19:33 +0000 (0:00:03.124) 0:00:17.542 *********
2025-08-29 17:19:34.127098 | orchestrator | ===============================================================================
2025-08-29 17:19:34.127110 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.62s
2025-08-29 17:19:34.127122 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.45s
2025-08-29 17:19:34.127134 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.12s
2025-08-29 17:19:34.127146 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.82s
2025-08-29 17:19:34.127158 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.
---- 2.09s
2025-08-29 17:19:34.127171 | orchestrator | 2025-08-29 17:19:34 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:19:34.128161 | orchestrator | 2025-08-29 17:19:34 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:19:34.128471 | orchestrator | 2025-08-29 17:19:34 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:19:34.131207 | orchestrator | 2025-08-29 17:19:34 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:19:37.211099 | orchestrator | 2025-08-29 17:19:37 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED
2025-08-29 17:19:37.214676 | orchestrator | 2025-08-29 17:19:37 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:19:37.218098 | orchestrator | 2025-08-29 17:19:37 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED
2025-08-29 17:19:37.218764 | orchestrator | 2025-08-29 17:19:37 | INFO  | Task 8e4e2a83-0073-4e36-b9d7-9797cf6d183f is in state STARTED
2025-08-29 17:19:37.219494 | orchestrator | 2025-08-29 17:19:37 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:19:37.220022 | orchestrator | 2025-08-29 17:19:37 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:19:37.226520 | orchestrator | 2025-08-29 17:19:37 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:19:37.226546 | orchestrator | 2025-08-29 17:19:37 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:19:40.334300 | orchestrator | 2025-08-29 17:19:40 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED
2025-08-29 17:19:40.334443 | orchestrator | 2025-08-29 17:19:40 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:19:40.334458 | orchestrator | 2025-08-29 17:19:40 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED
2025-08-29 17:19:40.334470 | orchestrator | 2025-08-29 17:19:40 | INFO  | Task 8e4e2a83-0073-4e36-b9d7-9797cf6d183f is in state STARTED
2025-08-29 17:19:40.334481 | orchestrator | 2025-08-29 17:19:40 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:19:40.334492 | orchestrator | 2025-08-29 17:19:40 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:19:40.334503 | orchestrator | 2025-08-29 17:19:40 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:19:40.334515 | orchestrator | 2025-08-29 17:19:40 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:19:43.747113 | orchestrator | 2025-08-29 17:19:43 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED
2025-08-29 17:19:43.747226 | orchestrator | 2025-08-29 17:19:43 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:19:43.747241 | orchestrator | 2025-08-29 17:19:43 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED
2025-08-29 17:19:43.747254 | orchestrator | 2025-08-29 17:19:43 | INFO  | Task 8e4e2a83-0073-4e36-b9d7-9797cf6d183f is in state STARTED
2025-08-29 17:19:43.747265 | orchestrator | 2025-08-29 17:19:43 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:19:43.747275 | orchestrator | 2025-08-29 17:19:43 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:19:43.747286 | orchestrator | 2025-08-29 17:19:43 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:19:43.747297 | orchestrator | 2025-08-29 17:19:43 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:19:46.665854 | orchestrator | 2025-08-29 17:19:46 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED
2025-08-29 17:19:46.666471 | orchestrator | 2025-08-29 17:19:46 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:19:46.667043 | orchestrator | 2025-08-29 17:19:46 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED
2025-08-29 17:19:46.669179 | orchestrator | 2025-08-29 17:19:46 | INFO  | Task 8e4e2a83-0073-4e36-b9d7-9797cf6d183f is in state STARTED
2025-08-29 17:19:46.671367 | orchestrator | 2025-08-29 17:19:46 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:19:46.672896 | orchestrator | 2025-08-29 17:19:46 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:19:46.675285 | orchestrator | 2025-08-29 17:19:46 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:19:46.675310 | orchestrator | 2025-08-29 17:19:46 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:19:49.763894 | orchestrator | 2025-08-29 17:19:49 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED
2025-08-29 17:19:49.767101 | orchestrator | 2025-08-29 17:19:49 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:19:49.769356 | orchestrator | 2025-08-29 17:19:49 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED
2025-08-29 17:19:49.770719 | orchestrator | 2025-08-29 17:19:49 | INFO  | Task 8e4e2a83-0073-4e36-b9d7-9797cf6d183f is in state STARTED
2025-08-29 17:19:49.771624 | orchestrator | 2025-08-29 17:19:49 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:19:49.772709 | orchestrator | 2025-08-29 17:19:49 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:19:49.773887 | orchestrator | 2025-08-29 17:19:49 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:19:49.773926 | orchestrator | 2025-08-29 17:19:49 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:19:52.841554 | orchestrator | 2025-08-29 17:19:52 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED
2025-08-29 17:19:52.841657 | orchestrator | 2025-08-29 17:19:52 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:19:52.841672 | orchestrator | 2025-08-29 17:19:52 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED
2025-08-29 17:19:52.841684 | orchestrator | 2025-08-29 17:19:52 | INFO  | Task 8e4e2a83-0073-4e36-b9d7-9797cf6d183f is in state STARTED
2025-08-29 17:19:52.843557 | orchestrator | 2025-08-29 17:19:52 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:19:52.843909 | orchestrator | 2025-08-29 17:19:52 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:19:52.844662 | orchestrator | 2025-08-29 17:19:52 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:19:52.844683 | orchestrator | 2025-08-29 17:19:52 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:19:56.085862 | orchestrator | 2025-08-29 17:19:55 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED
2025-08-29 17:19:56.085965 | orchestrator | 2025-08-29 17:19:55 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:19:56.085981 | orchestrator | 2025-08-29 17:19:55 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED
2025-08-29 17:19:56.085993 | orchestrator | 2025-08-29 17:19:55 | INFO  | Task 8e4e2a83-0073-4e36-b9d7-9797cf6d183f is in state STARTED
2025-08-29 17:19:56.086004 | orchestrator | 2025-08-29 17:19:55 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:19:56.086062 | orchestrator | 2025-08-29 17:19:55 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:19:56.086075 | orchestrator | 2025-08-29 17:19:55 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:19:56.086086 | orchestrator | 2025-08-29 17:19:55 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:19:59.047773 | orchestrator | 2025-08-29 17:19:59 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED 2025-08-29 17:19:59.051487 | orchestrator | 2025-08-29 17:19:59 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED 2025-08-29 17:19:59.054886 | orchestrator | 2025-08-29 17:19:59 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED 2025-08-29 17:19:59.059135 | orchestrator | 2025-08-29 17:19:59 | INFO  | Task 8e4e2a83-0073-4e36-b9d7-9797cf6d183f is in state STARTED 2025-08-29 17:19:59.072877 | orchestrator | 2025-08-29 17:19:59 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED 2025-08-29 17:19:59.083173 | orchestrator | 2025-08-29 17:19:59 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:19:59.085202 | orchestrator | 2025-08-29 17:19:59 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED 2025-08-29 17:19:59.085226 | orchestrator | 2025-08-29 17:19:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:20:02.228266 | orchestrator | 2025-08-29 17:20:02 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state STARTED 2025-08-29 17:20:02.228455 | orchestrator | 2025-08-29 17:20:02 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED 2025-08-29 17:20:02.228481 | orchestrator | 2025-08-29 17:20:02 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED 2025-08-29 17:20:02.228502 | orchestrator | 2025-08-29 17:20:02 | INFO  | Task 8e4e2a83-0073-4e36-b9d7-9797cf6d183f is in state STARTED 2025-08-29 17:20:02.228520 | orchestrator | 2025-08-29 17:20:02 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED 2025-08-29 17:20:02.228538 | orchestrator | 2025-08-29 17:20:02 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:20:02.240245 | orchestrator | 2025-08-29 17:20:02 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is 
in state STARTED 2025-08-29 17:20:02.240379 | orchestrator | 2025-08-29 17:20:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:20:05.266196 | orchestrator | 2025-08-29 17:20:05 | INFO  | Task f897b9a8-1b08-4a60-ad35-06d32d12226a is in state SUCCESS 2025-08-29 17:20:05.275307 | orchestrator | 2025-08-29 17:20:05 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED 2025-08-29 17:20:05.275380 | orchestrator | 2025-08-29 17:20:05 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED 2025-08-29 17:20:05.275392 | orchestrator | 2025-08-29 17:20:05 | INFO  | Task 8e4e2a83-0073-4e36-b9d7-9797cf6d183f is in state STARTED 2025-08-29 17:20:05.275404 | orchestrator | 2025-08-29 17:20:05 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED 2025-08-29 17:20:05.275415 | orchestrator | 2025-08-29 17:20:05 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:20:05.278848 | orchestrator | 2025-08-29 17:20:05 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED 2025-08-29 17:20:05.278891 | orchestrator | 2025-08-29 17:20:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:20:08.364982 | orchestrator | 2025-08-29 17:20:08 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED 2025-08-29 17:20:08.365078 | orchestrator | 2025-08-29 17:20:08 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state STARTED 2025-08-29 17:20:08.365093 | orchestrator | 2025-08-29 17:20:08 | INFO  | Task 8e4e2a83-0073-4e36-b9d7-9797cf6d183f is in state STARTED 2025-08-29 17:20:08.368257 | orchestrator | 2025-08-29 17:20:08 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED 2025-08-29 17:20:08.368294 | orchestrator | 2025-08-29 17:20:08 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:20:08.368313 | orchestrator | 2025-08-29 17:20:08 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in 
state STARTED
2025-08-29 17:20:08.368374 | orchestrator | 2025-08-29 17:20:08 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:20:11.544103 | orchestrator | 2025-08-29 17:20:11 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:20:11.544179 | orchestrator | 2025-08-29 17:20:11 | INFO  | Task 9ddef79a-aded-41d3-ba9c-d7e547e47abc is in state SUCCESS
2025-08-29 17:20:11.544193 | orchestrator | 2025-08-29 17:20:11 | INFO  | Task 8e4e2a83-0073-4e36-b9d7-9797cf6d183f is in state STARTED
2025-08-29 17:20:11.544205 | orchestrator | 2025-08-29 17:20:11 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:20:11.544216 | orchestrator | 2025-08-29 17:20:11 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:20:11.544226 | orchestrator | 2025-08-29 17:20:11 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:20:11.544237 | orchestrator | 2025-08-29 17:20:11 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:20:39.483423 | orchestrator |
2025-08-29 17:20:39 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:20:39.486226 | orchestrator | 2025-08-29 17:20:39 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:20:42.533352 | orchestrator | 2025-08-29 17:20:42 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:20:42.533454 | orchestrator | 2025-08-29 17:20:42 | INFO  | Task 8e4e2a83-0073-4e36-b9d7-9797cf6d183f is in state SUCCESS
2025-08-29 17:20:42.534292 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-08-29 17:20:42.534338 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-08-29 17:20:42.534349 | orchestrator | Friday 29 August 2025 17:19:17 +0000 (0:00:01.074) 0:00:01.074 *********
2025-08-29 17:20:42.534359 | orchestrator | ok: [testbed-manager] => {
2025-08-29 17:20:42.534369 | orchestrator |     "msg": "Support for the homer_url_kibana parameter has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-08-29 17:20:42.534379 | orchestrator | }
2025-08-29 17:20:42.534397 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-08-29 17:20:42.534406 | orchestrator | Friday 29 August 2025 17:19:17 +0000 (0:00:00.524) 0:00:01.599 *********
2025-08-29 17:20:42.534415 | orchestrator | ok: [testbed-manager]
2025-08-29 17:20:42.534434 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-08-29 17:20:42.534442 | orchestrator | Friday 29 August 2025 17:19:19 +0000 (0:00:01.544) 0:00:03.143 *********
2025-08-29 17:20:42.534451 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-08-29 17:20:42.534460 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-08-29 17:20:42.534477 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-08-29 17:20:42.534485 | orchestrator | Friday 29 August 2025 17:19:21 +0000 (0:00:02.374) 0:00:05.518 *********
2025-08-29 17:20:42.534494 | orchestrator | changed: [testbed-manager]
2025-08-29 17:20:42.534511 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-08-29 17:20:42.534520 | orchestrator | Friday 29 August 2025 17:19:25 +0000 (0:00:03.797) 0:00:09.315 *********
2025-08-29 17:20:42.534528 | orchestrator | changed: [testbed-manager]
2025-08-29 17:20:42.534545 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-08-29 17:20:42.534555 | orchestrator | Friday 29 August 2025 17:19:28 +0000 (0:00:03.426) 0:00:12.741 *********
2025-08-29 17:20:42.534563 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-08-29 17:20:42.534572 | orchestrator | ok: [testbed-manager]
2025-08-29 17:20:42.534589 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-08-29 17:20:42.534598 | orchestrator | Friday 29 August 2025 17:20:00 +0000 (0:00:31.554) 0:00:44.296 *********
2025-08-29 17:20:42.534607 | orchestrator | changed: [testbed-manager]
2025-08-29 17:20:42.534624 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:20:42.534655 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:20:42.534697 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:20:42.534705 | orchestrator | Friday 29 August 2025 17:20:03 +0000 (0:00:02.786) 0:00:47.082 *********
2025-08-29 17:20:42.534714 | orchestrator | ===============================================================================
2025-08-29 17:20:42.534723 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 31.55s
2025-08-29 17:20:42.534731 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.80s
2025-08-29 17:20:42.534740 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 3.43s
2025-08-29 17:20:42.534748 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.79s
2025-08-29 17:20:42.534757 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.38s
2025-08-29 17:20:42.534766 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.54s
2025-08-29 17:20:42.534774 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.52s
2025-08-29 17:20:42.534800 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-08-29 17:20:42.534818 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-08-29 17:20:42.534826 | orchestrator | Friday 29 August 2025 17:19:17 +0000 (0:00:00.447) 0:00:00.447 *********
2025-08-29 17:20:42.534835 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-08-29 17:20:42.534853 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-08-29 17:20:42.534862 | orchestrator | Friday 29 August 2025 17:19:17 +0000 (0:00:00.516) 0:00:00.963 *********
2025-08-29 17:20:42.534872 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-08-29 17:20:42.534882 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-08-29 17:20:42.534891 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-08-29 17:20:42.534911 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-08-29 17:20:42.534920 | orchestrator | Friday 29 August 2025 17:19:19 +0000 (0:00:01.470) 0:00:02.433 *********
2025-08-29 17:20:42.534930 | orchestrator | changed: [testbed-manager]
2025-08-29 17:20:42.534949 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-08-29 17:20:42.534959 | orchestrator | Friday 29 August 2025 17:19:21 +0000 (0:00:02.448)
0:00:04.882 *********
2025-08-29 17:20:42.534980 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-08-29 17:20:42.534991 | orchestrator | ok: [testbed-manager]
2025-08-29 17:20:42.535010 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-08-29 17:20:42.535020 | orchestrator | Friday 29 August 2025 17:19:59 +0000 (0:00:37.402) 0:00:42.285 *********
2025-08-29 17:20:42.535029 | orchestrator | changed: [testbed-manager]
2025-08-29 17:20:42.535048 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-08-29 17:20:42.535058 | orchestrator | Friday 29 August 2025 17:20:00 +0000 (0:00:01.529) 0:00:43.814 *********
2025-08-29 17:20:42.535067 | orchestrator | ok: [testbed-manager]
2025-08-29 17:20:42.535087 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-08-29 17:20:42.535096 | orchestrator | Friday 29 August 2025 17:20:02 +0000 (0:00:02.098) 0:00:45.913 *********
2025-08-29 17:20:42.535112 | orchestrator | changed: [testbed-manager]
2025-08-29 17:20:42.535129 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-08-29 17:20:42.535137 | orchestrator | Friday 29 August 2025 17:20:05 +0000 (0:00:02.616) 0:00:48.530 *********
2025-08-29 17:20:42.535146 | orchestrator | changed: [testbed-manager]
2025-08-29 17:20:42.535163 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for a healthy service] ***
2025-08-29 17:20:42.535172 | orchestrator | Friday 29 August 2025 17:20:06 +0000 (0:00:01.144) 0:00:49.674 *********
2025-08-29 17:20:42.535251 | orchestrator | changed: [testbed-manager]
2025-08-29 17:20:42.535271 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-08-29 17:20:42.535280 | orchestrator | Friday 29 August 2025 17:20:07 +0000 (0:00:00.917) 0:00:50.591 *********
2025-08-29 17:20:42.535289 | orchestrator | ok: [testbed-manager]
2025-08-29 17:20:42.535306 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:20:42.535337 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:20:42.535364 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:20:42.535372 | orchestrator | Friday 29 August 2025 17:20:08 +0000 (0:00:00.934) 0:00:51.525 *********
2025-08-29 17:20:42.535381 | orchestrator | ===============================================================================
2025-08-29 17:20:42.535390 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.40s
2025-08-29 17:20:42.535399 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.62s
2025-08-29 17:20:42.535407 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.45s
2025-08-29 17:20:42.535416 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 2.10s
2025-08-29 17:20:42.535430 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.53s
2025-08-29 17:20:42.535439 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.47s
2025-08-29 17:20:42.535448 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.14s
2025-08-29 17:20:42.535457 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.93s
2025-08-29 17:20:42.535466 | orchestrator | osism.services.openstackclient : Wait for a healthy service ------------- 0.92s
2025-08-29 17:20:42.535474 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.52s
2025-08-29 17:20:42.535501 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-08-29 17:20:42.535518 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-08-29 17:20:42.535527 | orchestrator | Friday 29 August 2025 17:19:38 +0000 (0:00:00.279) 0:00:00.279 *********
2025-08-29 17:20:42.535536 | orchestrator | ok: [testbed-manager]
2025-08-29 17:20:42.535553 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-08-29 17:20:42.535562 | orchestrator | Friday 29 August 2025 17:19:38 +0000 (0:00:00.875) 0:00:01.155 *********
2025-08-29 17:20:42.535570 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-08-29 17:20:42.535588 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-08-29 17:20:42.535597 | orchestrator | Friday 29 August 2025 17:19:39 +0000 (0:00:00.663) 0:00:01.819 *********
2025-08-29 17:20:42.535605 | orchestrator | changed: [testbed-manager]
2025-08-29 17:20:42.535623 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-08-29 17:20:42.535638 | orchestrator | Friday 29 August 2025 17:19:42 +0000 (0:00:02.575) 0:00:04.395 *********
2025-08-29 17:20:42.535647 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-08-29 17:20:42.535656 | orchestrator | ok: [testbed-manager]
2025-08-29 17:20:42.535674 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-08-29 17:20:42.535683 | orchestrator | Friday 29 August 2025 17:20:33 +0000 (0:00:51.758) 0:00:56.153 *********
2025-08-29 17:20:42.535759 | orchestrator | changed: [testbed-manager]
2025-08-29 17:20:42.535787 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:20:42.535796 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:20:42.535822 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:20:42.535838 | orchestrator | Friday 29 August 2025 17:20:40 +0000 (0:00:06.412) 0:01:02.566 *********
2025-08-29 17:20:42.535847 | orchestrator | ===============================================================================
2025-08-29 17:20:42.535855 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 51.76s
2025-08-29 17:20:42.535864 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 6.41s
2025-08-29 17:20:42.535873 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.58s
2025-08-29 17:20:42.535881 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.87s
2025-08-29 17:20:42.535890 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.66s
2025-08-29 17:20:42.535899 | orchestrator | 2025-08-29 17:20:42 | INFO  | Task
7d39414c-3475-477f-a23c-ed863af1ef5b is in state STARTED
2025-08-29 17:20:42.535908 | orchestrator | 2025-08-29 17:20:42 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:20:42.536826 | orchestrator | 2025-08-29 17:20:42 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:20:42.538109 | orchestrator | 2025-08-29 17:20:42 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:20:45.580600 | orchestrator | 2025-08-29 17:20:45 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:20:45.580894 | orchestrator | 2025-08-29 17:20:45 | INFO  | Task 7d39414c-3475-477f-a23c-ed863af1ef5b is in state SUCCESS
2025-08-29 17:20:45.580970 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 17:20:45.580991 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 17:20:45.581001 | orchestrator | Friday 29 August 2025 17:19:18 +0000 (0:00:01.273) 0:00:01.273 *********
2025-08-29 17:20:45.581012 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-08-29 17:20:45.581022 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-08-29 17:20:45.581032 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-08-29 17:20:45.581041 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-08-29 17:20:45.581051 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-08-29 17:20:45.581061 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-08-29 17:20:45.581071 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-08-29 17:20:45.581101 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-08-29 17:20:45.581118 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-08-29 17:20:45.581174 | orchestrator | Friday 29 August 2025 17:19:19 +0000 (0:00:01.462) 0:00:02.736 *********
2025-08-29 17:20:45.581218 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:20:45.581249 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-08-29 17:20:45.581259 | orchestrator | Friday 29 August 2025 17:19:20 +0000 (0:00:01.198) 0:00:03.934 *********
2025-08-29 17:20:45.581269 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:20:45.581280 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:20:45.581289 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:20:45.581299 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:20:45.581335 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:20:45.581347 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:20:45.581356 | orchestrator | ok: [testbed-manager]
2025-08-29 17:20:45.581375 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-08-29 17:20:45.581385 | orchestrator | Friday 29 August 2025 17:19:22 +0000 (0:00:01.322) 0:00:05.257 *********
2025-08-29 17:20:45.581394 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:20:45.581404 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:20:45.581413 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:20:45.581422 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:20:45.581432 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:20:45.581442 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:20:45.581453 | orchestrator | ok: [testbed-manager]
2025-08-29 17:20:45.581474 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-08-29 17:20:45.581484 | orchestrator | Friday 29 August 2025 17:19:25 +0000 (0:00:03.045) 0:00:08.302 *********
2025-08-29 17:20:45.581495 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:20:45.581506 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:20:45.581517 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:20:45.581528 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:20:45.581538 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:20:45.581549 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:20:45.581559 | orchestrator | changed: [testbed-manager]
2025-08-29 17:20:45.581619 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-08-29 17:20:45.581631 | orchestrator | Friday 29 August 2025 17:19:29 +0000 (0:00:03.784) 0:00:12.087 *********
2025-08-29 17:20:45.581641 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:20:45.581650 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:20:45.581659 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:20:45.581668 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:20:45.581678 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:20:45.581687 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:20:45.581696 | orchestrator | changed: [testbed-manager]
2025-08-29 17:20:45.581715 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-08-29 17:20:45.581725 | orchestrator | Friday 29 August 2025 17:19:42 +0000 (0:00:13.772) 0:00:25.859 *********
2025-08-29
17:20:45.581734 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:20:45.581743 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:20:45.581753 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:20:45.581762 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:20:45.581771 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:20:45.581781 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:20:45.581906 | orchestrator | changed: [testbed-manager] 2025-08-29 17:20:45.581917 | orchestrator | 2025-08-29 17:20:45.581927 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-08-29 17:20:45.581948 | orchestrator | Friday 29 August 2025 17:20:14 +0000 (0:00:31.338) 0:00:57.198 ********* 2025-08-29 17:20:45.581959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:20:45.581971 | orchestrator | 2025-08-29 17:20:45.581981 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-08-29 17:20:45.581990 | orchestrator | Friday 29 August 2025 17:20:15 +0000 (0:00:01.280) 0:00:58.478 ********* 2025-08-29 17:20:45.582000 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-08-29 17:20:45.582010 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-08-29 17:20:45.582078 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-08-29 17:20:45.582089 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-08-29 17:20:45.582116 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-08-29 17:20:45.582133 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-08-29 17:20:45.582148 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-08-29 17:20:45.582163 | 
orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-08-29 17:20:45.582179 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-08-29 17:20:45.582196 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-08-29 17:20:45.582212 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-08-29 17:20:45.582227 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-08-29 17:20:45.582237 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-08-29 17:20:45.582246 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-08-29 17:20:45.582256 | orchestrator | 2025-08-29 17:20:45.582265 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-08-29 17:20:45.582276 | orchestrator | Friday 29 August 2025 17:20:23 +0000 (0:00:07.901) 0:01:06.379 ********* 2025-08-29 17:20:45.582286 | orchestrator | ok: [testbed-manager] 2025-08-29 17:20:45.582296 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:20:45.582305 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:20:45.582348 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:20:45.582358 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:20:45.582368 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:20:45.582377 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:20:45.582387 | orchestrator | 2025-08-29 17:20:45.582396 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-08-29 17:20:45.582406 | orchestrator | Friday 29 August 2025 17:20:25 +0000 (0:00:01.851) 0:01:08.231 ********* 2025-08-29 17:20:45.582416 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:20:45.582425 | orchestrator | changed: [testbed-manager] 2025-08-29 17:20:45.582435 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:20:45.582444 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:20:45.582454 | orchestrator | changed: 
[testbed-node-3] 2025-08-29 17:20:45.582463 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:20:45.582472 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:20:45.582482 | orchestrator | 2025-08-29 17:20:45.582491 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-08-29 17:20:45.582501 | orchestrator | Friday 29 August 2025 17:20:27 +0000 (0:00:02.562) 0:01:10.794 ********* 2025-08-29 17:20:45.582511 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:20:45.582520 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:20:45.582531 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:20:45.582542 | orchestrator | ok: [testbed-manager] 2025-08-29 17:20:45.582552 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:20:45.582563 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:20:45.582574 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:20:45.582584 | orchestrator | 2025-08-29 17:20:45.582595 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-08-29 17:20:45.582614 | orchestrator | Friday 29 August 2025 17:20:30 +0000 (0:00:02.470) 0:01:13.265 ********* 2025-08-29 17:20:45.582625 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:20:45.582636 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:20:45.582647 | orchestrator | ok: [testbed-manager] 2025-08-29 17:20:45.582657 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:20:45.582668 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:20:45.582679 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:20:45.582689 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:20:45.582700 | orchestrator | 2025-08-29 17:20:45.582711 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-08-29 17:20:45.582722 | orchestrator | Friday 29 August 2025 17:20:33 +0000 (0:00:03.481) 0:01:16.748 ********* 2025-08-29 17:20:45.582733 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-08-29 17:20:45.582747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:20:45.582758 | orchestrator | 2025-08-29 17:20:45.582769 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-08-29 17:20:45.582779 | orchestrator | Friday 29 August 2025 17:20:35 +0000 (0:00:01.987) 0:01:18.736 ********* 2025-08-29 17:20:45.582789 | orchestrator | changed: [testbed-manager] 2025-08-29 17:20:45.582798 | orchestrator | 2025-08-29 17:20:45.582808 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-08-29 17:20:45.582817 | orchestrator | Friday 29 August 2025 17:20:38 +0000 (0:00:03.190) 0:01:21.926 ********* 2025-08-29 17:20:45.582827 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:20:45.582836 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:20:45.582846 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:20:45.582855 | orchestrator | changed: [testbed-manager] 2025-08-29 17:20:45.582865 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:20:45.582874 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:20:45.582884 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:20:45.582893 | orchestrator | 2025-08-29 17:20:45.582903 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:20:45.582912 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:20:45.582923 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:20:45.582933 | orchestrator | testbed-node-1 : 
ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:20:45.582943 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:20:45.582960 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:20:45.582969 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:20:45.582979 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:20:45.582989 | orchestrator | 2025-08-29 17:20:45.582999 | orchestrator | 2025-08-29 17:20:45.583008 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:20:45.583018 | orchestrator | Friday 29 August 2025 17:20:42 +0000 (0:00:03.593) 0:01:25.520 ********* 2025-08-29 17:20:45.583028 | orchestrator | =============================================================================== 2025-08-29 17:20:45.583153 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 31.34s 2025-08-29 17:20:45.583184 | orchestrator | osism.services.netdata : Add repository -------------------------------- 13.77s 2025-08-29 17:20:45.583201 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.90s 2025-08-29 17:20:45.583225 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.78s 2025-08-29 17:20:45.583240 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.59s 2025-08-29 17:20:45.583250 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.48s 2025-08-29 17:20:45.583259 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.19s 2025-08-29 17:20:45.583268 | orchestrator | osism.services.netdata : Install 
apt-transport-https package ------------ 3.05s 2025-08-29 17:20:45.583278 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.56s 2025-08-29 17:20:45.583288 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.47s 2025-08-29 17:20:45.583388 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.99s 2025-08-29 17:20:45.583399 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.85s 2025-08-29 17:20:45.583409 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.46s 2025-08-29 17:20:45.583419 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.32s 2025-08-29 17:20:45.583428 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.28s 2025-08-29 17:20:45.583438 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.20s 2025-08-29 17:20:45.583448 | orchestrator | 2025-08-29 17:20:45 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:20:45.583463 | orchestrator | 2025-08-29 17:20:45 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED 2025-08-29 17:20:45.583473 | orchestrator | 2025-08-29 17:20:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:20:48.623476 | orchestrator | 2025-08-29 17:20:48 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED 2025-08-29 17:20:48.625411 | orchestrator | 2025-08-29 17:20:48 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:20:48.627339 | orchestrator | 2025-08-29 17:20:48 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED 2025-08-29 17:20:48.627386 | orchestrator | 2025-08-29 17:20:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:20:51.673148 | 
orchestrator | 2025-08-29 17:20:51 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:20:51.680974 | orchestrator | 2025-08-29 17:20:51 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:20:51.682441 | orchestrator | 2025-08-29 17:20:51 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state STARTED
2025-08-29 17:20:51.682477 | orchestrator | 2025-08-29 17:20:51 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:22:07.999872 | orchestrator | 2025-08-29 17:22:07 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:22:08.001645 | orchestrator | 2025-08-29 17:22:07 | INFO  | Task 9ac05191-d5ba-486a-8d99-2db3dd79f23f is in state STARTED
2025-08-29 17:22:08.002431 | orchestrator | 2025-08-29 17:22:08 | INFO  | Task 8006091f-b287-491f-8496-3df80775e12b is in state STARTED
2025-08-29 17:22:08.004770 | orchestrator | 2025-08-29 17:22:08 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:22:08.005326 | orchestrator | 2025-08-29 17:22:08 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:22:08.008549 | orchestrator | 2025-08-29 17:22:08 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED
2025-08-29 17:22:08.011804 | orchestrator | 2025-08-29 17:22:08 | INFO  | Task 0260624e-3b4c-496b-81c2-f007c03d2d91 is in state SUCCESS
2025-08-29 17:22:08.014218 | orchestrator |
2025-08-29 17:22:08.014341 | orchestrator |
2025-08-29 17:22:08.014360 | orchestrator | PLAY [Apply role common] *******************************************************
2025-08-29 17:22:08.014388 | orchestrator |
2025-08-29 17:22:08.014401 | orchestrator | TASK [common : include_tasks] **************************************************
2025-08-29 17:22:08.014425 | orchestrator | Friday 29 August 2025 17:19:05 +0000 (0:00:00.338) 0:00:00.338 *********
2025-08-29 17:22:08.014449 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:22:08.014462 | orchestrator |
2025-08-29 17:22:08.014475 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-08-29 17:22:08.014494 | orchestrator | Friday 29 August 2025 17:19:07 +0000 (0:00:01.638) 0:00:01.977 *********
2025-08-29 17:22:08.014512 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 17:22:08.014541 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 17:22:08.014561 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 17:22:08.014580 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 17:22:08.014597 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 17:22:08.014633 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 17:22:08.014652 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 17:22:08.014669 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 17:22:08.014687 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 17:22:08.014704 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 17:22:08.014722 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 17:22:08.014742 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 17:22:08.014761 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 17:22:08.014779 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 17:22:08.014830 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 17:22:08.014843 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 17:22:08.014854 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 17:22:08.014865 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 17:22:08.014875 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 17:22:08.014898 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 17:22:08.014910 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 17:22:08.014920 | orchestrator |
2025-08-29 17:22:08.014931 | orchestrator | TASK [common : include_tasks] **************************************************
2025-08-29 17:22:08.014942 | orchestrator | Friday 29 August 2025 17:19:12 +0000 (0:00:05.203) 0:00:07.180 *********
2025-08-29 17:22:08.014954 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:22:08.014967 | orchestrator |
2025-08-29 17:22:08.014986 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-08-29 17:22:08.015004 | orchestrator | Friday 29 August 2025 17:19:14 +0000 (0:00:01.762) 0:00:08.942 *********
2025-08-29 17:22:08.015028 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:22:08.015054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:22:08.015124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:22:08.015140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:22:08.015152 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:22:08.015175 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.015195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True,
'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.015214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.015234 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:22:08.015304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.015327 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:22:08.015345 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.015391 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.015417 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.015435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.015454 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.015472 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.015518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.015540 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.015560 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.015590 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.015608 | orchestrator | 2025-08-29 17:22:08.015625 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-08-29 17:22:08.015641 | orchestrator | Friday 29 August 2025 17:19:19 +0000 (0:00:05.091) 0:00:14.034 ********* 2025-08-29 17:22:08.015659 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:22:08.015686 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.015706 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-08-29 17:22:08.015725 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:22:08.015743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:22:08.015795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.015817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.015847 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:08.015866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:22:08.015885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.015911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.015929 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:22:08.015947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:22:08.015965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.015984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:22:08.016068 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:22:08.016088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016127 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:08.016145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:22:08.016174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:22:08.016233 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:22:08.016265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016323 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016336 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:22:08.016347 | orchestrator | 2025-08-29 17:22:08.016358 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-08-29 17:22:08.016370 | orchestrator | Friday 29 August 2025 17:19:21 +0000 (0:00:01.536) 0:00:15.570 ********* 2025-08-29 17:22:08.016381 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:22:08.016393 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016410 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:22:08.016433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016468 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:22:08.016494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:22:08.016510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:22:08.016577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016616 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:22:08.016636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:22:08.016660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016691 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:08.016702 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:08.016712 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:22:08.016723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:22:08.016734 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016756 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:22:08.016767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:22:08.016779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016796 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.016807 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:22:08.016818 | orchestrator | 2025-08-29 17:22:08.016829 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-08-29 17:22:08.016840 | orchestrator | Friday 29 August 2025 17:19:24 +0000 (0:00:03.168) 0:00:18.739 ********* 2025-08-29 17:22:08.016850 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:22:08.016861 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:22:08.016872 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:08.016882 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:08.016893 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:22:08.016909 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:22:08.016920 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:22:08.016931 | orchestrator | 2025-08-29 17:22:08.016942 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-08-29 17:22:08.016953 | orchestrator | Friday 29 August 2025 17:19:25 +0000 (0:00:01.170) 0:00:19.910 ********* 2025-08-29 17:22:08.016964 | orchestrator | 
skipping: [testbed-manager] 2025-08-29 17:22:08.016974 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:22:08.016985 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:08.016995 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:08.017005 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:22:08.017016 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:22:08.017033 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:22:08.017044 | orchestrator | 2025-08-29 17:22:08.017055 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-08-29 17:22:08.017066 | orchestrator | Friday 29 August 2025 17:19:26 +0000 (0:00:01.029) 0:00:20.939 ********* 2025-08-29 17:22:08.017078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:22:08.017089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:22:08.017100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:22:08.017117 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:22:08.017135 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:22:08.017146 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:22:08.017164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.017176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.017187 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-08-29 17:22:08.017198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.017221 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.017233 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.017244 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.017262 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.017313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.017329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.017340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.017351 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:22:08.017372 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.017389 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.017401 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.017412 | orchestrator | 2025-08-29 17:22:08.017423 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-08-29 17:22:08.017434 | orchestrator | Friday 29 August 2025 17:19:35 +0000 (0:00:08.878) 0:00:29.818 ********* 2025-08-29 17:22:08.017445 | orchestrator | [WARNING]: Skipped 2025-08-29 17:22:08.017459 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-08-29 17:22:08.017471 | orchestrator | to this access issue: 2025-08-29 17:22:08.017491 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-08-29 17:22:08.017509 | orchestrator | directory 2025-08-29 17:22:08.017527 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 17:22:08.017545 | orchestrator | 2025-08-29 17:22:08.017561 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-08-29 17:22:08.017579 | orchestrator | Friday 29 August 2025 17:19:37 +0000 (0:00:02.021) 0:00:31.839 ********* 2025-08-29 17:22:08.017598 | orchestrator | [WARNING]: Skipped 2025-08-29 17:22:08.017618 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-08-29 17:22:08.017646 | orchestrator | to this access issue: 2025-08-29 17:22:08.017658 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-08-29 17:22:08.017669 | orchestrator | directory 2025-08-29 17:22:08.017680 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 17:22:08.017691 | orchestrator | 2025-08-29 17:22:08.017702 | 
orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-08-29 17:22:08.017713 | orchestrator | Friday 29 August 2025 17:19:38 +0000 (0:00:01.332) 0:00:33.172 ********* 2025-08-29 17:22:08.017723 | orchestrator | [WARNING]: Skipped 2025-08-29 17:22:08.017734 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-08-29 17:22:08.017745 | orchestrator | to this access issue: 2025-08-29 17:22:08.017756 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-08-29 17:22:08.017767 | orchestrator | directory 2025-08-29 17:22:08.017778 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 17:22:08.017788 | orchestrator | 2025-08-29 17:22:08.017799 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-08-29 17:22:08.017810 | orchestrator | Friday 29 August 2025 17:19:39 +0000 (0:00:01.068) 0:00:34.241 ********* 2025-08-29 17:22:08.017821 | orchestrator | [WARNING]: Skipped 2025-08-29 17:22:08.017832 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-08-29 17:22:08.017852 | orchestrator | to this access issue: 2025-08-29 17:22:08.017863 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-08-29 17:22:08.017874 | orchestrator | directory 2025-08-29 17:22:08.017885 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 17:22:08.017896 | orchestrator | 2025-08-29 17:22:08.017906 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-08-29 17:22:08.017917 | orchestrator | Friday 29 August 2025 17:19:40 +0000 (0:00:01.058) 0:00:35.299 ********* 2025-08-29 17:22:08.017928 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:08.017939 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:08.017949 | orchestrator | changed: 
[testbed-node-1] 2025-08-29 17:22:08.017960 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:22:08.017971 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:22:08.017981 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:22:08.017992 | orchestrator | changed: [testbed-manager] 2025-08-29 17:22:08.018003 | orchestrator | 2025-08-29 17:22:08.018013 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-08-29 17:22:08.018061 | orchestrator | Friday 29 August 2025 17:19:46 +0000 (0:00:05.674) 0:00:40.974 ********* 2025-08-29 17:22:08.018079 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 17:22:08.018099 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 17:22:08.018116 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 17:22:08.018134 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 17:22:08.018152 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 17:22:08.018178 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 17:22:08.018199 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 17:22:08.018218 | orchestrator | 2025-08-29 17:22:08.018231 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-08-29 17:22:08.018242 | orchestrator | Friday 29 August 2025 17:19:50 +0000 (0:00:04.467) 0:00:45.442 ********* 2025-08-29 17:22:08.018253 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:08.018264 | orchestrator | changed: 
[testbed-node-3] 2025-08-29 17:22:08.018295 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:22:08.018306 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:08.018317 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:22:08.018328 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:22:08.018338 | orchestrator | changed: [testbed-manager] 2025-08-29 17:22:08.018349 | orchestrator | 2025-08-29 17:22:08.018360 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-08-29 17:22:08.018371 | orchestrator | Friday 29 August 2025 17:19:55 +0000 (0:00:04.191) 0:00:49.634 ********* 2025-08-29 17:22:08.018382 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:22:08.018403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.018423 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:22:08.018435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.018447 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.018460 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:22:08.018476 | orchestrator | ok: 
[testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:22:08.018488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.018500 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:22:08.018527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.018540 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:22:08.018551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:22:08.018563 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-08-29 17:22:08.018574 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.018590 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:22:08.018601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.018613 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.018637 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.018649 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:22:08.018660 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.018672 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.018683 | orchestrator |
2025-08-29 17:22:08.018695 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-08-29 17:22:08.018706 | orchestrator | Friday 29 August 2025 17:19:59 +0000 (0:00:04.457) 0:00:54.091 *********
2025-08-29 17:22:08.018717 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 17:22:08.018728 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 17:22:08.018739 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 17:22:08.018750 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 17:22:08.018761 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 17:22:08.018776 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 17:22:08.018787 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 17:22:08.018798 | orchestrator |
2025-08-29 17:22:08.018809 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-08-29 17:22:08.018820 | orchestrator | Friday 29 August 2025 17:20:04 +0000 (0:00:04.577) 0:00:58.668 *********
2025-08-29 17:22:08.018831 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 17:22:08.018842 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 17:22:08.018853 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 17:22:08.018869 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 17:22:08.018880 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 17:22:08.018891 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 17:22:08.018902 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 17:22:08.018912 | orchestrator |
2025-08-29 17:22:08.018924 | orchestrator | TASK [common : Check common containers] ****************************************
2025-08-29 17:22:08.018934 | orchestrator | Friday 29 August 2025 17:20:06 +0000 (0:00:02.693) 0:01:01.362 *********
2025-08-29 17:22:08.018946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:22:08.018963 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:22:08.018975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:22:08.018987 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:22:08.018998 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:22:08.019009 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:22:08.019027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:22:08.019038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.019057 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.019069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.019080 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.019092 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.019113 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.019130 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.019142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.019154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.019171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.019183 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.019195 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.019207 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.019218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:22:08.019234 | orchestrator |
2025-08-29 17:22:08.019246 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-08-29 17:22:08.019257 | orchestrator | Friday 29 August 2025 17:20:10 +0000 (0:00:03.527) 0:01:04.890 *********
2025-08-29 17:22:08.019268 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:22:08.019293 | orchestrator | changed: [testbed-manager]
2025-08-29 17:22:08.019304 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:22:08.019315 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:22:08.019326 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:22:08.019341 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:22:08.019352 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:22:08.019363 | orchestrator |
2025-08-29 17:22:08.019373 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-08-29 17:22:08.019384 | orchestrator | Friday 29 August 2025 17:20:13 +0000 (0:00:02.852) 0:01:07.743 *********
2025-08-29 17:22:08.019395 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:22:08.019406 | orchestrator | changed: [testbed-manager]
2025-08-29 17:22:08.019417 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:22:08.019427 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:22:08.019438 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:22:08.019448 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:22:08.019459 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:22:08.019469 | orchestrator |
2025-08-29 17:22:08.019480 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 17:22:08.019491 | orchestrator | Friday 29 August 2025 17:20:14 +0000 (0:00:01.433) 0:01:09.176 *********
2025-08-29 17:22:08.019502 | orchestrator |
2025-08-29 17:22:08.019512 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 17:22:08.019523 | orchestrator | Friday 29 August 2025 17:20:14 +0000 (0:00:00.066) 0:01:09.243 *********
2025-08-29 17:22:08.019534 | orchestrator |
2025-08-29 17:22:08.019544 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 17:22:08.019555 | orchestrator | Friday 29 August 2025 17:20:14 +0000 (0:00:00.070) 0:01:09.313 *********
2025-08-29 17:22:08.019566 | orchestrator |
2025-08-29 17:22:08.019576 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 17:22:08.019587 | orchestrator | Friday 29 August 2025 17:20:14 +0000 (0:00:00.069) 0:01:09.383 *********
2025-08-29 17:22:08.019598 | orchestrator |
2025-08-29 17:22:08.019608 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 17:22:08.019619 | orchestrator | Friday 29 August 2025 17:20:15 +0000 (0:00:00.242) 0:01:09.626 *********
2025-08-29 17:22:08.019630 | orchestrator |
2025-08-29 17:22:08.019640 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 17:22:08.019651 | orchestrator | Friday 29 August 2025 17:20:15 +0000 (0:00:00.066) 0:01:09.692 *********
2025-08-29 17:22:08.019661 | orchestrator |
2025-08-29 17:22:08.019672 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 17:22:08.019683 | orchestrator | Friday 29 August 2025 17:20:15 +0000 (0:00:00.067) 0:01:09.759 *********
2025-08-29 17:22:08.019694 | orchestrator |
2025-08-29 17:22:08.019705 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-08-29 17:22:08.019722 | orchestrator | Friday 29 August 2025 17:20:15 +0000 (0:00:00.086) 0:01:09.846 *********
2025-08-29 17:22:08.019733 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:22:08.019744 | orchestrator | changed: [testbed-manager]
2025-08-29 17:22:08.019754 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:22:08.019765 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:22:08.019776 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:22:08.019787 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:22:08.019797 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:22:08.019808 | orchestrator |
2025-08-29 17:22:08.019819 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-08-29 17:22:08.019830 | orchestrator | Friday 29 August 2025 17:21:04 +0000 (0:00:49.023) 0:01:58.869 *********
2025-08-29 17:22:08.019847 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:22:08.019858 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:22:08.019868 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:22:08.019879 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:22:08.019890 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:22:08.019900 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:22:08.019911 | orchestrator | changed: [testbed-manager]
2025-08-29 17:22:08.019922 | orchestrator |
2025-08-29 17:22:08.019932 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-08-29 17:22:08.019943 | orchestrator | Friday 29 August 2025 17:21:53 +0000 (0:00:48.917) 0:02:47.787 *********
2025-08-29 17:22:08.019954 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:22:08.019965 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:22:08.019975 | orchestrator | ok: [testbed-manager]
2025-08-29 17:22:08.019986 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:22:08.019997 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:22:08.020008 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:22:08.020019 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:22:08.020029 | orchestrator |
2025-08-29 17:22:08.020040 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-08-29 17:22:08.020051 | orchestrator | Friday 29 August 2025 17:21:55 +0000 (0:00:02.115) 0:02:49.902 *********
2025-08-29 17:22:08.020061 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:22:08.020072 | orchestrator | changed: [testbed-manager]
2025-08-29 17:22:08.020083 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:22:08.020094 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:22:08.020104 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:22:08.020115 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:22:08.020125 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:22:08.020136 | orchestrator |
2025-08-29 17:22:08.020147 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:22:08.020159 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 17:22:08.020170 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 17:22:08.020181 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 17:22:08.020192 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 17:22:08.020208 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 17:22:08.020219 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 17:22:08.020230 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 17:22:08.020241 | orchestrator |
2025-08-29 17:22:08.020252 | orchestrator |
2025-08-29 17:22:08.020263 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:22:08.020290 | orchestrator | Friday 29 August 2025 17:22:05 +0000 (0:00:09.758) 0:02:59.661 *********
2025-08-29 17:22:08.020302 | orchestrator | ===============================================================================
2025-08-29 17:22:08.020313 | orchestrator | common : Restart fluentd container ------------------------------------- 49.02s
2025-08-29 17:22:08.020323 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 48.92s
2025-08-29 17:22:08.020334 | orchestrator | common : Restart cron container ----------------------------------------- 9.76s
2025-08-29 17:22:08.020351 | orchestrator | common : Copying over config.json files for services -------------------- 8.88s
2025-08-29 17:22:08.020362 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.67s
2025-08-29 17:22:08.020373 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.20s
2025-08-29 17:22:08.020383 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.09s
2025-08-29 17:22:08.020394 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.58s
2025-08-29 17:22:08.020405 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.47s
2025-08-29 17:22:08.020415 | orchestrator | common : Ensuring config directories have correct owner and permission --- 4.46s
2025-08-29 17:22:08.020426 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.19s
2025-08-29 17:22:08.020437 | orchestrator | common : Check common containers ---------------------------------------- 3.53s
2025-08-29 17:22:08.020447 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.17s
2025-08-29 17:22:08.020458 | orchestrator | common : Creating log volume -------------------------------------------- 2.85s
2025-08-29 17:22:08.020475 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.69s
2025-08-29 17:22:08.020486 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.12s
2025-08-29 17:22:08.020497 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.02s
2025-08-29 17:22:08.020507 | orchestrator | common : include_tasks -------------------------------------------------- 1.76s
2025-08-29 17:22:08.020518 | orchestrator | common : include_tasks -------------------------------------------------- 1.64s
2025-08-29 17:22:08.020529 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.54s
2025-08-29 17:22:08.020540 | orchestrator | 2025-08-29 17:22:08 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:22:11.107646 | orchestrator | 2025-08-29 17:22:11 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:22:11.107770 | orchestrator | 2025-08-29 17:22:11 | INFO  | Task 9ac05191-d5ba-486a-8d99-2db3dd79f23f is in state STARTED
2025-08-29 17:22:11.107787 | orchestrator | 2025-08-29 17:22:11 | INFO  | Task 8006091f-b287-491f-8496-3df80775e12b is in state STARTED
2025-08-29 17:22:11.107799 | orchestrator | 2025-08-29 17:22:11 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:22:11.107810 | orchestrator | 2025-08-29 17:22:11 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:22:11.107821 | orchestrator | 2025-08-29 17:22:11 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED
2025-08-29 17:22:11.107832 | orchestrator | 2025-08-29 17:22:11 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:22:14.144211 | orchestrator | 2025-08-29 17:22:14 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:22:14.144532 | orchestrator | 2025-08-29 17:22:14 | INFO  | Task 9ac05191-d5ba-486a-8d99-2db3dd79f23f is in state STARTED
2025-08-29 17:22:14.145421 | orchestrator | 2025-08-29 17:22:14 | INFO  | Task 8006091f-b287-491f-8496-3df80775e12b is in state STARTED
2025-08-29 17:22:14.146149 | orchestrator | 2025-08-29 17:22:14 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:22:14.147289 | orchestrator | 2025-08-29 17:22:14 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:22:14.148586 | orchestrator | 2025-08-29 17:22:14 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED
2025-08-29 17:22:14.148608 | orchestrator | 2025-08-29 17:22:14 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:22:17.210452 | orchestrator | 2025-08-29 17:22:17 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:22:17.210664 | orchestrator | 2025-08-29 17:22:17 | INFO  | Task 9ac05191-d5ba-486a-8d99-2db3dd79f23f is in state STARTED
2025-08-29 17:22:17.211129 | orchestrator | 2025-08-29 17:22:17 | INFO  | Task 8006091f-b287-491f-8496-3df80775e12b is in state STARTED
2025-08-29 17:22:17.211781 | orchestrator | 2025-08-29 17:22:17 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:22:17.212597 | orchestrator | 2025-08-29 17:22:17 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:22:17.213467 | orchestrator | 2025-08-29 17:22:17 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED
2025-08-29 17:22:17.213502 | orchestrator | 2025-08-29 17:22:17 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:22:20.253495 | orchestrator | 2025-08-29 17:22:20 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:22:20.254224 | orchestrator | 2025-08-29 17:22:20 | INFO  | Task 9ac05191-d5ba-486a-8d99-2db3dd79f23f is in state STARTED
2025-08-29 17:22:20.254943 | orchestrator | 2025-08-29 17:22:20 | INFO  | Task 8006091f-b287-491f-8496-3df80775e12b is in state STARTED
2025-08-29 17:22:20.255731 | orchestrator | 2025-08-29 17:22:20 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:22:20.256565 | orchestrator | 2025-08-29 17:22:20 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:22:20.257876 | orchestrator | 2025-08-29 17:22:20 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED
2025-08-29 17:22:20.257905 | orchestrator | 2025-08-29 17:22:20 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:22:23.330324 | orchestrator | 2025-08-29 17:22:23 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:22:23.330764 | orchestrator | 2025-08-29 17:22:23 | INFO  | Task 9ac05191-d5ba-486a-8d99-2db3dd79f23f is in state STARTED
2025-08-29 17:22:23.331584 | orchestrator | 2025-08-29 17:22:23 | INFO  | Task 8006091f-b287-491f-8496-3df80775e12b is in state STARTED
2025-08-29 17:22:23.334449 | orchestrator | 2025-08-29 17:22:23 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:22:23.334556 | orchestrator | 2025-08-29 17:22:23 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:22:23.335522 | orchestrator | 2025-08-29 17:22:23 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED
2025-08-29 17:22:23.336853 | orchestrator | 2025-08-29 17:22:23 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:22:26.419413 | orchestrator | 2025-08-29 17:22:26 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:22:26.419513 | orchestrator | 2025-08-29 17:22:26 | INFO  | Task 9ac05191-d5ba-486a-8d99-2db3dd79f23f is in state STARTED
2025-08-29 17:22:26.419526 | orchestrator | 2025-08-29 17:22:26 | INFO  | Task 8006091f-b287-491f-8496-3df80775e12b is in state STARTED
2025-08-29 17:22:26.419537 | orchestrator | 2025-08-29 17:22:26 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:22:26.419547 | orchestrator | 2025-08-29 17:22:26 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:22:26.419558 | orchestrator | 2025-08-29 17:22:26 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED
2025-08-29 17:22:26.419568 | orchestrator | 2025-08-29 17:22:26 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:22:29.553452 | orchestrator | 2025-08-29 17:22:29 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:22:29.553566 | orchestrator | 2025-08-29 17:22:29 | INFO  | Task 9ac05191-d5ba-486a-8d99-2db3dd79f23f is in state SUCCESS
2025-08-29 17:22:29.553581 | orchestrator | 2025-08-29 17:22:29 | INFO  | Task 8006091f-b287-491f-8496-3df80775e12b is in state STARTED
2025-08-29 17:22:29.553593 | orchestrator | 2025-08-29 17:22:29 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:22:29.555471 | orchestrator | 2025-08-29 17:22:29 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:22:29.556747 | orchestrator | 2025-08-29 17:22:29 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED
2025-08-29 17:22:29.556834 | orchestrator | 2025-08-29 17:22:29 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:22:32.597215 | orchestrator | 2025-08-29 17:22:32 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:22:32.597824 | orchestrator | 2025-08-29 17:22:32 | INFO  | Task 8006091f-b287-491f-8496-3df80775e12b is in state STARTED
2025-08-29 17:22:32.598658 | orchestrator | 2025-08-29 17:22:32 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:22:32.599885 | orchestrator | 2025-08-29 17:22:32 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:22:32.601106 | orchestrator | 2025-08-29 17:22:32 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED
2025-08-29 17:22:32.601677 | orchestrator | 2025-08-29 17:22:32 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED
2025-08-29 17:22:32.601697 | orchestrator | 2025-08-29 17:22:32 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:22:35.665181 | orchestrator | 2025-08-29 17:22:35 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:22:35.666725 | orchestrator | 2025-08-29 17:22:35 | INFO  | Task 8006091f-b287-491f-8496-3df80775e12b is in state STARTED
2025-08-29 17:22:35.669089 | orchestrator | 2025-08-29 17:22:35 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:22:35.671199 | orchestrator | 2025-08-29 17:22:35 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:22:35.671787 | orchestrator | 2025-08-29 17:22:35 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED
2025-08-29 17:22:35.673709 | orchestrator | 2025-08-29 17:22:35 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED
2025-08-29 17:22:35.673751 | orchestrator | 2025-08-29 17:22:35 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:22:38.703389 | orchestrator | 2025-08-29 17:22:38 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:22:38.703469 | orchestrator | 2025-08-29 17:22:38 | INFO  | Task 8006091f-b287-491f-8496-3df80775e12b is in state STARTED
2025-08-29 17:22:38.703892 | orchestrator | 2025-08-29 17:22:38 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:22:38.704414 | orchestrator | 2025-08-29 17:22:38 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:22:38.707545 | orchestrator | 2025-08-29 17:22:38 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED
2025-08-29 17:22:38.708056 | orchestrator | 2025-08-29 17:22:38 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED
2025-08-29 17:22:38.708080 | orchestrator | 2025-08-29 17:22:38 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:22:41.838626 | orchestrator | 2025-08-29 17:22:41 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:22:41.951595 | orchestrator |
2025-08-29 17:22:41.951665 | orchestrator |
2025-08-29 17:22:41.951679 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 17:22:41.951692 | orchestrator |
2025-08-29 17:22:41.951703 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 17:22:41.951714 | orchestrator | Friday 29 August 2025 17:22:12 +0000 (0:00:00.489) 0:00:00.489 *********
2025-08-29 17:22:41.951725 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:22:41.951737 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:22:41.951748 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:22:41.951758 | orchestrator |
2025-08-29 17:22:41.951769 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 17:22:41.951780 | orchestrator | Friday 29 August 2025 17:22:13 +0000 (0:00:00.573) 0:00:01.062 *********
2025-08-29 17:22:41.951791 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-08-29 17:22:41.951802 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-08-29 17:22:41.951813 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-08-29 17:22:41.951823 | orchestrator |
2025-08-29 17:22:41.951834 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-08-29 17:22:41.951845 | orchestrator |
2025-08-29 17:22:41.951856 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-08-29 17:22:41.951867 | orchestrator | Friday 29 August 2025 17:22:13 +0000 (0:00:00.925) 0:00:01.966 *********
2025-08-29 17:22:41.951878 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:22:41.951889 | orchestrator |
2025-08-29 17:22:41.951900 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-08-29 17:22:41.951911 | orchestrator | Friday 29 August 2025 17:22:14 +0000 (0:00:01.025) 0:00:02.892 *********
2025-08-29 17:22:41.951922 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-08-29 17:22:41.951933 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-08-29 17:22:41.951944 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-08-29 17:22:41.951954 | orchestrator |
2025-08-29 17:22:41.951965 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-08-29 17:22:41.951976 | orchestrator | Friday 29 August 2025 17:22:15 +0000 (0:00:01.025) 0:00:03.917 *********
2025-08-29 17:22:41.951986 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-08-29 17:22:41.952005 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-08-29 17:22:41.952016 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-08-29 17:22:41.952026 | orchestrator |
2025-08-29 17:22:41.952037 | orchestrator | TASK [memcached : Check memcached container]
*********************************** 2025-08-29 17:22:41.952048 | orchestrator | Friday 29 August 2025 17:22:18 +0000 (0:00:02.992) 0:00:06.910 ********* 2025-08-29 17:22:41.952059 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:41.952069 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:41.952080 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:22:41.952090 | orchestrator | 2025-08-29 17:22:41.952101 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-08-29 17:22:41.952112 | orchestrator | Friday 29 August 2025 17:22:21 +0000 (0:00:02.756) 0:00:09.666 ********* 2025-08-29 17:22:41.952123 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:22:41.952133 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:41.952144 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:41.952156 | orchestrator | 2025-08-29 17:22:41.952168 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:22:41.952180 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:22:41.952192 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:22:41.952220 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:22:41.952232 | orchestrator | 2025-08-29 17:22:41.952244 | orchestrator | 2025-08-29 17:22:41.952257 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:22:41.952269 | orchestrator | Friday 29 August 2025 17:22:28 +0000 (0:00:06.853) 0:00:16.520 ********* 2025-08-29 17:22:41.952303 | orchestrator | =============================================================================== 2025-08-29 17:22:41.952315 | orchestrator | memcached : Restart memcached container --------------------------------- 6.85s 
2025-08-29 17:22:41.952326 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.99s
2025-08-29 17:22:41.952338 | orchestrator | memcached : Check memcached container ----------------------------------- 2.76s
2025-08-29 17:22:41.952350 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.03s
2025-08-29 17:22:41.952362 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.93s
2025-08-29 17:22:41.952373 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.90s
2025-08-29 17:22:41.952385 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.57s
2025-08-29 17:22:41.952397 | orchestrator |
2025-08-29 17:22:41.952408 | orchestrator |
2025-08-29 17:22:41.952420 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 17:22:41.952432 | orchestrator |
2025-08-29 17:22:41.952443 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 17:22:41.952455 | orchestrator | Friday 29 August 2025 17:22:13 +0000 (0:00:00.874) 0:00:00.874 *********
2025-08-29 17:22:41.952466 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:22:41.952478 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:22:41.952490 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:22:41.952501 | orchestrator |
2025-08-29 17:22:41.952512 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 17:22:41.952537 | orchestrator | Friday 29 August 2025 17:22:14 +0000 (0:00:00.578) 0:00:01.452 *********
2025-08-29 17:22:41.952548 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-08-29 17:22:41.952559 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-08-29 17:22:41.952570 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-08-29 17:22:41.952580 | orchestrator |
2025-08-29 17:22:41.952591 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-08-29 17:22:41.952602 | orchestrator |
2025-08-29 17:22:41.952612 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-08-29 17:22:41.952623 | orchestrator | Friday 29 August 2025 17:22:14 +0000 (0:00:00.796) 0:00:02.249 *********
2025-08-29 17:22:41.952634 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:22:41.952645 | orchestrator |
2025-08-29 17:22:41.952655 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-08-29 17:22:41.952666 | orchestrator | Friday 29 August 2025 17:22:15 +0000 (0:00:00.967) 0:00:03.217 *********
2025-08-29 17:22:41.952679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 17:22:41.952700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 17:22:41.952719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 17:22:41.952731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 17:22:41.952743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 17:22:41.952761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 17:22:41.952773 | orchestrator |
2025-08-29 17:22:41.952784 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-08-29 17:22:41.952796 | orchestrator | Friday 29 August 2025 17:22:17 +0000 (0:00:02.030) 0:00:05.248 *********
2025-08-29 17:22:41.952807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 17:22:41.952819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 17:22:41.952840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 17:22:41.952852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 17:22:41.952864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 17:22:41.952881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 17:22:41.952892 | orchestrator |
2025-08-29 17:22:41.952903 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-08-29 17:22:41.952914 | orchestrator | Friday 29 August 2025 17:22:21 +0000 (0:00:03.748) 0:00:08.996 *********
2025-08-29 17:22:41.952926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 17:22:41.952937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 17:22:41.952958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 17:22:41.952970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 17:22:41.952981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 17:22:41.952993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 17:22:41.953004 | orchestrator |
2025-08-29 17:22:41.953020 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-08-29 17:22:41.953031 | orchestrator | Friday 29 August 2025 17:22:25 +0000 (0:00:03.670) 0:00:12.667 *********
2025-08-29 17:22:41.953042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 17:22:41.953053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 17:22:41.953070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 17:22:41.953081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 17:22:41.953093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 17:22:41.953109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 17:22:41.953121 | orchestrator |
2025-08-29 17:22:41.953133 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-08-29 17:22:41.953143 | orchestrator | Friday 29 August 2025 17:22:27 +0000 (0:00:02.155) 0:00:14.823 *********
2025-08-29 17:22:41.953154 | orchestrator |
2025-08-29 17:22:41.953166 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-08-29 17:22:41.953182 | orchestrator | Friday 29 August 2025 17:22:27 +0000 (0:00:00.085) 0:00:14.908 *********
2025-08-29 17:22:41.953193 | orchestrator |
2025-08-29 17:22:41.953204 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-08-29 17:22:41.953215 | orchestrator | Friday 29 August 2025 17:22:27 +0000 (0:00:00.070) 0:00:14.978 *********
2025-08-29 17:22:41.953225 | orchestrator |
2025-08-29 17:22:41.953236 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-08-29 17:22:41.953247 | orchestrator | Friday 29 August 2025 17:22:27 +0000 (0:00:00.056) 0:00:15.035 *********
2025-08-29 17:22:41.953264 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:22:41.953310 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:22:41.953322 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:22:41.953332 | orchestrator |
2025-08-29 17:22:41.953343 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-08-29 17:22:41.953354 | orchestrator | Friday 29 August 2025 17:22:31 +0000 (0:00:03.786) 0:00:18.821 *********
2025-08-29 17:22:41.953364 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:22:41.953375 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:22:41.953386 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:22:41.953396 | orchestrator |
2025-08-29 17:22:41.953407 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:22:41.953418 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:22:41.953429 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:22:41.953440 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:22:41.953451 | orchestrator |
2025-08-29 17:22:41.953461 | orchestrator |
2025-08-29 17:22:41.953472 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:22:41.953483 | orchestrator | Friday 29 August 2025 17:22:41 +0000 (0:00:09.699) 0:00:28.521 *********
2025-08-29 17:22:41.953493 | orchestrator | ===============================================================================
2025-08-29 17:22:41.953504 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.70s
2025-08-29 17:22:41.953519 | orchestrator | redis : Restart redis container ----------------------------------------- 3.79s
2025-08-29 17:22:41.953530 | orchestrator | redis : Copying over default config.json files -------------------------- 3.75s
2025-08-29 17:22:41.953540 | orchestrator | redis : Copying over redis config files --------------------------------- 3.67s
2025-08-29 17:22:41.953551 | orchestrator | redis : Check redis containers ------------------------------------------ 2.16s
2025-08-29 17:22:41.953562 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.03s
2025-08-29 17:22:41.953576 | orchestrator | redis : include_tasks --------------------------------------------------- 0.97s
2025-08-29 17:22:41.953587 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.80s
2025-08-29 17:22:41.953598 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.58s
2025-08-29 17:22:41.953608 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.21s
2025-08-29 17:22:41.953619 | orchestrator | 2025-08-29 17:22:41 | INFO  | Task 8006091f-b287-491f-8496-3df80775e12b is in state SUCCESS
2025-08-29 17:22:41.962805 | orchestrator | 2025-08-29 17:22:41 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:22:41.965910 | orchestrator | 2025-08-29 17:22:41 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:22:41.968459 | orchestrator | 2025-08-29 17:22:41 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED
2025-08-29 17:22:42.026955 | orchestrator | 2025-08-29 17:22:42 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED
2025-08-29 17:22:42.027019 | orchestrator | 2025-08-29 17:22:42 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:22:45.264545 | orchestrator | 2025-08-29 17:22:45 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:22:45.265061 | orchestrator | 2025-08-29 17:22:45 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:22:45.265506 | orchestrator | 2025-08-29 17:22:45 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:22:45.266170 | orchestrator | 2025-08-29 17:22:45 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED
2025-08-29 17:22:45.267201 | orchestrator | 2025-08-29 17:22:45 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED
2025-08-29 17:22:45.267222 | orchestrator | 2025-08-29 17:22:45 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:22:48.466237 | orchestrator | 2025-08-29 17:22:48 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:22:48.466389 | orchestrator | 2025-08-29 17:22:48 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:22:48.466487 | orchestrator | 2025-08-29 17:22:48 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:22:48.471433 | orchestrator | 2025-08-29 17:22:48 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED
2025-08-29 17:22:48.471462 | orchestrator | 2025-08-29 17:22:48 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED
2025-08-29 17:22:48.471473 | orchestrator | 2025-08-29 17:22:48 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:22:51.589618 | orchestrator | 2025-08-29 17:22:51 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state STARTED
2025-08-29 17:22:51.589708 | orchestrator | 2025-08-29 17:22:51 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:22:51.589721 | orchestrator | 2025-08-29 17:22:51 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:22:51.589732 | orchestrator | 2025-08-29 17:22:51 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED
2025-08-29 17:22:51.589744 | orchestrator | 2025-08-29 17:22:51 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED
2025-08-29 17:22:51.589755 | orchestrator | 2025-08-29 17:22:51 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:22:54.620885 | orchestrator |
2025-08-29 17:22:54.620997 | orchestrator | 2025-08-29 17:22:54 | INFO  | Task b97b471c-035a-4b1e-9747-4c813875e279 is in state SUCCESS
2025-08-29 17:22:54.622750 | orchestrator |
2025-08-29 17:22:54.622797 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-08-29 17:22:54.622810 | orchestrator |
2025-08-29 17:22:54.622823 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-08-29 17:22:54.622835 | orchestrator | Friday 29 August 2025 17:19:06 +0000 (0:00:00.249) 0:00:00.249 *********
2025-08-29 17:22:54.622846 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:22:54.622859 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:22:54.622870 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:22:54.622881 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:22:54.622892 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:22:54.622903 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:22:54.622913 | orchestrator |
2025-08-29 17:22:54.622931 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-08-29 17:22:54.622943 | orchestrator | Friday 29 August 2025 17:19:07 +0000 (0:00:01.006) 0:00:01.255 *********
2025-08-29 17:22:54.622954 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:22:54.622966 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:22:54.622976 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:22:54.622987 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:22:54.622998 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:22:54.623009 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:22:54.623020 | orchestrator |
2025-08-29 17:22:54.623031 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-08-29 17:22:54.623043 | orchestrator | Friday 29 August 2025 17:19:08 +0000 (0:00:00.869) 0:00:02.125 *********
2025-08-29 17:22:54.623074 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:22:54.623085 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:22:54.623096 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:22:54.623106 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:22:54.623117 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:22:54.623127 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:22:54.623138 | orchestrator |
2025-08-29 17:22:54.623149 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-08-29 17:22:54.623160 | orchestrator | Friday 29 August 2025 17:19:09 +0000 (0:00:00.990) 0:00:03.118 *********
2025-08-29 17:22:54.623170 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:22:54.623181 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:22:54.623191 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:22:54.623202 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:22:54.623213 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:22:54.623223 | orchestrator | changed: [testbed-node-2]
2025-08-
17:22:54.623234 | orchestrator | 2025-08-29 17:22:54.623245 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-08-29 17:22:54.623256 | orchestrator | Friday 29 August 2025 17:19:11 +0000 (0:00:02.396) 0:00:05.515 ********* 2025-08-29 17:22:54.623266 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:22:54.623301 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:22:54.623313 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:22:54.623323 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:54.623334 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:22:54.623344 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:54.623355 | orchestrator | 2025-08-29 17:22:54.623366 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-08-29 17:22:54.623376 | orchestrator | Friday 29 August 2025 17:19:13 +0000 (0:00:01.558) 0:00:07.073 ********* 2025-08-29 17:22:54.623387 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:22:54.623398 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:22:54.623408 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:22:54.623419 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:54.623429 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:22:54.623440 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:54.623451 | orchestrator | 2025-08-29 17:22:54.623462 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-08-29 17:22:54.623472 | orchestrator | Friday 29 August 2025 17:19:14 +0000 (0:00:01.512) 0:00:08.586 ********* 2025-08-29 17:22:54.623483 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:22:54.623494 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:22:54.623504 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:22:54.623514 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:22:54.623525 | 
orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:54.623535 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:54.623546 | orchestrator | 2025-08-29 17:22:54.623557 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-08-29 17:22:54.623567 | orchestrator | Friday 29 August 2025 17:19:15 +0000 (0:00:00.822) 0:00:09.409 ********* 2025-08-29 17:22:54.623578 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:22:54.623589 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:22:54.623599 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:22:54.623610 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:22:54.623620 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:54.623631 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:54.623641 | orchestrator | 2025-08-29 17:22:54.623652 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-08-29 17:22:54.623663 | orchestrator | Friday 29 August 2025 17:19:16 +0000 (0:00:01.189) 0:00:10.598 ********* 2025-08-29 17:22:54.623674 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 17:22:54.623684 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 17:22:54.623702 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:22:54.623713 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 17:22:54.623724 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 17:22:54.623735 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:22:54.623745 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 17:22:54.623756 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 17:22:54.623766 | orchestrator 
| skipping: [testbed-node-5] 2025-08-29 17:22:54.623777 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 17:22:54.623801 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 17:22:54.623812 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:22:54.623823 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 17:22:54.623834 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 17:22:54.623844 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:54.623855 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 17:22:54.623866 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 17:22:54.623876 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:54.623887 | orchestrator | 2025-08-29 17:22:54.623906 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-08-29 17:22:54.623918 | orchestrator | Friday 29 August 2025 17:19:17 +0000 (0:00:00.922) 0:00:11.520 ********* 2025-08-29 17:22:54.623928 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:22:54.623939 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:22:54.623950 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:22:54.623961 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:22:54.623971 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:54.623981 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:54.623992 | orchestrator | 2025-08-29 17:22:54.624003 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-08-29 17:22:54.624014 | orchestrator | Friday 29 August 2025 17:19:19 +0000 (0:00:01.496) 0:00:13.017 ********* 2025-08-29 
17:22:54.624024 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:22:54.624035 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:22:54.624045 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:22:54.624056 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:22:54.624066 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:22:54.624077 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:22:54.624087 | orchestrator | 2025-08-29 17:22:54.624098 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-08-29 17:22:54.624109 | orchestrator | Friday 29 August 2025 17:19:20 +0000 (0:00:00.918) 0:00:13.935 ********* 2025-08-29 17:22:54.624120 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:22:54.624130 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:54.624141 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:54.624151 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:22:54.624162 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:22:54.624172 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:22:54.624183 | orchestrator | 2025-08-29 17:22:54.624194 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-08-29 17:22:54.624205 | orchestrator | Friday 29 August 2025 17:19:26 +0000 (0:00:05.870) 0:00:19.805 ********* 2025-08-29 17:22:54.624215 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:22:54.624226 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:22:54.624236 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:22:54.624247 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:22:54.624258 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:54.624289 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:54.624300 | orchestrator | 2025-08-29 17:22:54.624311 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-08-29 17:22:54.624322 | 
orchestrator | Friday 29 August 2025 17:19:27 +0000 (0:00:01.318) 0:00:21.124 ********* 2025-08-29 17:22:54.624333 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:22:54.624343 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:22:54.624354 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:22:54.624364 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:22:54.624375 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:54.624385 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:54.624396 | orchestrator | 2025-08-29 17:22:54.624407 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-08-29 17:22:54.624420 | orchestrator | Friday 29 August 2025 17:19:29 +0000 (0:00:02.277) 0:00:23.402 ********* 2025-08-29 17:22:54.624431 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:22:54.624441 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:22:54.624451 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:22:54.624462 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:22:54.624473 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:22:54.624483 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:22:54.624494 | orchestrator | 2025-08-29 17:22:54.624505 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-08-29 17:22:54.624516 | orchestrator | Friday 29 August 2025 17:19:30 +0000 (0:00:01.303) 0:00:24.706 ********* 2025-08-29 17:22:54.624526 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-08-29 17:22:54.624538 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-08-29 17:22:54.624549 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-08-29 17:22:54.624559 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-08-29 17:22:54.624570 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-08-29 
17:22:54.624580 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-08-29 17:22:54.624591 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-08-29 17:22:54.624602 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-08-29 17:22:54.624612 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-08-29 17:22:54.624623 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-08-29 17:22:54.624633 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-08-29 17:22:54.624644 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-08-29 17:22:54.624654 | orchestrator | 2025-08-29 17:22:54.624665 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-08-29 17:22:54.624676 | orchestrator | Friday 29 August 2025 17:19:33 +0000 (0:00:02.566) 0:00:27.273 ********* 2025-08-29 17:22:54.624686 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:22:54.624697 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:22:54.624707 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:22:54.624718 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:54.624729 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:22:54.624740 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:54.624751 | orchestrator | 2025-08-29 17:22:54.624768 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-08-29 17:22:54.624779 | orchestrator | 2025-08-29 17:22:54.624790 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-08-29 17:22:54.624801 | orchestrator | Friday 29 August 2025 17:19:35 +0000 (0:00:02.141) 0:00:29.415 ********* 2025-08-29 17:22:54.624811 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:22:54.624822 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:22:54.624833 | orchestrator | ok: [testbed-node-0] 
2025-08-29 17:22:54.624843 | orchestrator | 2025-08-29 17:22:54.624854 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-08-29 17:22:54.624865 | orchestrator | Friday 29 August 2025 17:19:36 +0000 (0:00:01.271) 0:00:30.686 ********* 2025-08-29 17:22:54.624886 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:22:54.624897 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:22:54.624907 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:22:54.624918 | orchestrator | 2025-08-29 17:22:54.624929 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-08-29 17:22:54.624940 | orchestrator | Friday 29 August 2025 17:19:38 +0000 (0:00:01.329) 0:00:32.016 ********* 2025-08-29 17:22:54.624950 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:22:54.624961 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:22:54.624972 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:22:54.624983 | orchestrator | 2025-08-29 17:22:54.624993 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-08-29 17:22:54.625004 | orchestrator | Friday 29 August 2025 17:19:39 +0000 (0:00:00.938) 0:00:32.955 ********* 2025-08-29 17:22:54.625015 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:22:54.625026 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:22:54.625036 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:22:54.625047 | orchestrator | 2025-08-29 17:22:54.625058 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-08-29 17:22:54.625068 | orchestrator | Friday 29 August 2025 17:19:41 +0000 (0:00:02.239) 0:00:35.194 ********* 2025-08-29 17:22:54.625079 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:22:54.625090 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:54.625101 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:54.625111 | orchestrator | 2025-08-29 
17:22:54.625122 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-08-29 17:22:54.625133 | orchestrator | Friday 29 August 2025 17:19:41 +0000 (0:00:00.526) 0:00:35.720 ********* 2025-08-29 17:22:54.625143 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:22:54.625154 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:22:54.625165 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:22:54.625175 | orchestrator | 2025-08-29 17:22:54.625186 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-08-29 17:22:54.625197 | orchestrator | Friday 29 August 2025 17:19:44 +0000 (0:00:02.032) 0:00:37.753 ********* 2025-08-29 17:22:54.625208 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:22:54.625218 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:54.625229 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:54.625240 | orchestrator | 2025-08-29 17:22:54.625250 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-08-29 17:22:54.625261 | orchestrator | Friday 29 August 2025 17:19:46 +0000 (0:00:02.324) 0:00:40.078 ********* 2025-08-29 17:22:54.625272 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:22:54.625302 | orchestrator | 2025-08-29 17:22:54.625313 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-08-29 17:22:54.625324 | orchestrator | Friday 29 August 2025 17:19:47 +0000 (0:00:00.910) 0:00:40.988 ********* 2025-08-29 17:22:54.625335 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:22:54.625345 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:22:54.625356 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:22:54.625366 | orchestrator | 2025-08-29 17:22:54.625377 | orchestrator | TASK [k3s_server : Create manifests directory on first master] 
***************** 2025-08-29 17:22:54.625388 | orchestrator | Friday 29 August 2025 17:19:50 +0000 (0:00:03.315) 0:00:44.303 ********* 2025-08-29 17:22:54.625399 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:54.625409 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:54.625420 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:54.625430 | orchestrator | 2025-08-29 17:22:54.625441 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-08-29 17:22:54.625452 | orchestrator | Friday 29 August 2025 17:19:52 +0000 (0:00:01.659) 0:00:45.962 ********* 2025-08-29 17:22:54.625463 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:54.625474 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:54.625489 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:54.625500 | orchestrator | 2025-08-29 17:22:54.625511 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-08-29 17:22:54.625522 | orchestrator | Friday 29 August 2025 17:19:53 +0000 (0:00:01.346) 0:00:47.309 ********* 2025-08-29 17:22:54.625532 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:54.625543 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:54.625554 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:54.625564 | orchestrator | 2025-08-29 17:22:54.625575 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-08-29 17:22:54.625586 | orchestrator | Friday 29 August 2025 17:19:55 +0000 (0:00:01.878) 0:00:49.187 ********* 2025-08-29 17:22:54.625596 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:22:54.625607 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:54.625617 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:54.625628 | orchestrator | 2025-08-29 17:22:54.625639 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] 
*********************************** 2025-08-29 17:22:54.625649 | orchestrator | Friday 29 August 2025 17:19:56 +0000 (0:00:00.708) 0:00:49.895 ********* 2025-08-29 17:22:54.625660 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:22:54.625671 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:54.625682 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:54.625692 | orchestrator | 2025-08-29 17:22:54.625703 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-08-29 17:22:54.625714 | orchestrator | Friday 29 August 2025 17:19:56 +0000 (0:00:00.803) 0:00:50.699 ********* 2025-08-29 17:22:54.625725 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:54.625735 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:22:54.625746 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:54.625757 | orchestrator | 2025-08-29 17:22:54.625774 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-08-29 17:22:54.625785 | orchestrator | Friday 29 August 2025 17:20:00 +0000 (0:00:03.921) 0:00:54.621 ********* 2025-08-29 17:22:54.625796 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-08-29 17:22:54.625807 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-08-29 17:22:54.625823 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-08-29 17:22:54.625834 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2025-08-29 17:22:54.625845 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-08-29 17:22:54.625856 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-08-29 17:22:54.625867 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 17:22:54.625877 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 17:22:54.625888 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 17:22:54.625942 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-08-29 17:22:54.625955 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-08-29 17:22:54.625965 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2025-08-29 17:22:54.625987 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:22:54.625998 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:22:54.626009 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:22:54.626079 | orchestrator | 2025-08-29 17:22:54.626093 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-08-29 17:22:54.626103 | orchestrator | Friday 29 August 2025 17:20:44 +0000 (0:00:43.845) 0:01:38.466 ********* 2025-08-29 17:22:54.626114 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:22:54.626125 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:54.626136 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:54.626147 | orchestrator | 2025-08-29 17:22:54.626158 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-08-29 17:22:54.626169 | orchestrator | Friday 29 August 2025 17:20:45 +0000 (0:00:00.301) 0:01:38.768 ********* 2025-08-29 17:22:54.626179 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:54.626190 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:22:54.626201 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:54.626212 | orchestrator | 2025-08-29 17:22:54.626223 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-08-29 17:22:54.626234 | orchestrator | Friday 29 August 2025 17:20:46 +0000 (0:00:00.983) 0:01:39.751 ********* 2025-08-29 17:22:54.626245 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:54.626255 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:22:54.626266 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:54.626312 | orchestrator | 2025-08-29 17:22:54.626324 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-08-29 17:22:54.626334 | orchestrator | Friday 29 August 2025 17:20:47 +0000 (0:00:01.395) 0:01:41.147 ********* 2025-08-29 17:22:54.626345 
| orchestrator | changed: [testbed-node-1] 2025-08-29 17:22:54.626356 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:54.626366 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:54.626377 | orchestrator | 2025-08-29 17:22:54.626388 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-08-29 17:22:54.626398 | orchestrator | Friday 29 August 2025 17:21:11 +0000 (0:00:24.231) 0:02:05.379 ********* 2025-08-29 17:22:54.626409 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:22:54.626420 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:22:54.626431 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:22:54.626441 | orchestrator | 2025-08-29 17:22:54.626452 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-08-29 17:22:54.626463 | orchestrator | Friday 29 August 2025 17:21:12 +0000 (0:00:00.792) 0:02:06.172 ********* 2025-08-29 17:22:54.626474 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:22:54.626484 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:22:54.626495 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:22:54.626505 | orchestrator | 2025-08-29 17:22:54.626516 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-08-29 17:22:54.626527 | orchestrator | Friday 29 August 2025 17:21:13 +0000 (0:00:00.714) 0:02:06.887 ********* 2025-08-29 17:22:54.626538 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:54.626548 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:22:54.626559 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:54.626569 | orchestrator | 2025-08-29 17:22:54.626580 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-08-29 17:22:54.626591 | orchestrator | Friday 29 August 2025 17:21:13 +0000 (0:00:00.720) 0:02:07.608 ********* 2025-08-29 17:22:54.626602 | orchestrator | ok: [testbed-node-0] 
2025-08-29 17:22:54.626621 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:22:54.626632 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:22:54.626642 | orchestrator | 2025-08-29 17:22:54.626653 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-08-29 17:22:54.626664 | orchestrator | Friday 29 August 2025 17:21:14 +0000 (0:00:01.034) 0:02:08.642 ********* 2025-08-29 17:22:54.626683 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:22:54.626694 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:22:54.626704 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:22:54.626715 | orchestrator | 2025-08-29 17:22:54.626726 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-08-29 17:22:54.626736 | orchestrator | Friday 29 August 2025 17:21:15 +0000 (0:00:00.544) 0:02:09.186 ********* 2025-08-29 17:22:54.626753 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:54.626764 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:22:54.626775 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:54.626785 | orchestrator | 2025-08-29 17:22:54.626796 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-08-29 17:22:54.626807 | orchestrator | Friday 29 August 2025 17:21:16 +0000 (0:00:00.896) 0:02:10.082 ********* 2025-08-29 17:22:54.626817 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:54.626828 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:22:54.626839 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:54.626849 | orchestrator | 2025-08-29 17:22:54.626860 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-08-29 17:22:54.626871 | orchestrator | Friday 29 August 2025 17:21:17 +0000 (0:00:00.681) 0:02:10.763 ********* 2025-08-29 17:22:54.626881 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:54.626892 | 
orchestrator | changed: [testbed-node-1] 2025-08-29 17:22:54.626903 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:54.626913 | orchestrator | 2025-08-29 17:22:54.626924 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-08-29 17:22:54.626935 | orchestrator | Friday 29 August 2025 17:21:18 +0000 (0:00:01.266) 0:02:12.030 ********* 2025-08-29 17:22:54.626946 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:22:54.626956 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:22:54.626967 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:22:54.626977 | orchestrator | 2025-08-29 17:22:54.626988 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-08-29 17:22:54.626999 | orchestrator | Friday 29 August 2025 17:21:19 +0000 (0:00:00.945) 0:02:12.976 ********* 2025-08-29 17:22:54.627010 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:22:54.627020 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:54.627031 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:54.627042 | orchestrator | 2025-08-29 17:22:54.627052 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-08-29 17:22:54.627063 | orchestrator | Friday 29 August 2025 17:21:19 +0000 (0:00:00.328) 0:02:13.304 ********* 2025-08-29 17:22:54.627074 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:22:54.627085 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:54.627096 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:54.627106 | orchestrator | 2025-08-29 17:22:54.627117 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-08-29 17:22:54.627128 | orchestrator | Friday 29 August 2025 17:21:19 +0000 (0:00:00.343) 0:02:13.648 ********* 2025-08-29 17:22:54.627139 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:22:54.627150 | orchestrator | 
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Friday 29 August 2025 17:21:20 +0000 (0:00:00.961)       0:02:14.610 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Friday 29 August 2025 17:21:21 +0000 (0:00:00.627)       0:02:15.237 *********
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Friday 29 August 2025 17:21:24 +0000 (0:00:03.297)       0:02:18.535 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Friday 29 August 2025 17:21:25 +0000 (0:00:00.785)       0:02:19.320 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Friday 29 August 2025 17:21:26 +0000 (0:00:00.892)       0:02:20.213 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Friday 29 August 2025 17:21:26 +0000 (0:00:00.414)       0:02:20.628 *********
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Friday 29 August 2025 17:21:27 +0000 (0:00:00.754)       0:02:21.382 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Friday 29 August 2025 17:21:28 +0000 (0:00:00.372)       0:02:21.754 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Friday 29 August 2025 17:21:28 +0000 (0:00:00.462)       0:02:22.216 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
Friday 29 August 2025 17:21:29 +0000 (0:00:00.572)       0:02:22.789 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
Friday 29 August 2025 17:21:29 +0000 (0:00:00.887)       0:02:23.676 *********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [k3s_agent : Configure the k3s service] ***********************************
Friday 29 August 2025 17:21:31 +0000 (0:00:01.102)       0:02:24.779 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Manage k3s service] ******************************************
Friday 29 August 2025 17:21:32 +0000 (0:00:01.289)       0:02:26.068 *********
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-3]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Friday 29 August 2025 17:21:45 +0000 (0:00:13.590)       0:02:39.658 *********
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Friday 29 August 2025 17:21:46 +0000 (0:00:00.782)       0:02:40.441 *********
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Friday 29 August 2025 17:21:47 +0000 (0:00:00.453)       0:02:40.895 *********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Friday 29 August 2025 17:21:47 +0000 (0:00:00.583)       0:02:41.478 *********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Friday 29 August 2025 17:21:48 +0000 (0:00:00.884)       0:02:42.362 *********
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Friday 29 August 2025 17:21:49 +0000 (0:00:00.609)       0:02:42.972 *********
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Friday 29 August 2025 17:21:50 +0000 (0:00:01.495)       0:02:44.467 *********
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Friday 29 August 2025 17:21:51 +0000 (0:00:00.843)       0:02:45.311 *********
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Friday 29 August 2025 17:21:52 +0000 (0:00:00.461)       0:02:45.773 *********
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Friday 29 August 2025 17:21:52 +0000 (0:00:00.672)       0:02:46.445 *********
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Friday 29 August 2025 17:21:52 +0000 (0:00:00.149)       0:02:46.594 *********
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Friday 29 August 2025 17:21:53 +0000 (0:00:00.248)       0:02:46.842 *********
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Friday 29 August 2025 17:21:54 +0000 (0:00:00.972)       0:02:47.815 *********
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Friday 29 August 2025 17:21:55 +0000 (0:00:01.705)       0:02:49.520 *********
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Friday 29 August 2025 17:21:56 +0000 (0:00:00.973)       0:02:50.493 *********
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Friday 29 August 2025 17:21:57 +0000 (0:00:00.465)       0:02:50.959 *********
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Friday 29 August 2025 17:22:04 +0000 (0:00:07.521)       0:02:58.480 *********
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Friday 29 August 2025 17:22:19 +0000 (0:00:14.843)       0:03:13.324 *********
ok: [testbed-manager]
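The "Prepare kubeconfig file" play above copies the kubeconfig from the first master and then rewrites its server address, because k3s generates `/etc/rancher/k3s/k3s.yaml` pointing at `https://127.0.0.1:6443`, which is only valid on the node itself. A minimal sketch of that rewrite step, using a hypothetical cluster endpoint `192.168.16.9` (the actual target address is not shown in this log):

```shell
#!/bin/sh
set -eu

# Stand-in for the kubeconfig fetched from the first master; k3s writes
# the server entry as https://127.0.0.1:6443 by default.
cat > kubeconfig.yml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Rewrite the loopback address to the cluster endpoint (hypothetical VIP).
sed -i 's|https://127.0.0.1:6443|https://192.168.16.9:6443|' kubeconfig.yml

# The kubeconfig now points at the reachable API endpoint.
grep 'server:' kubeconfig.yml
```

The same substitution is done twice in the play, once for the operator user on the manager and once for the copy used inside the manager service.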

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Friday 29 August 2025 17:22:20 +0000 (0:00:00.603)       0:03:13.927 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Friday 29 August 2025 17:22:20 +0000 (0:00:00.491)       0:03:14.419 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Friday 29 August 2025 17:22:21 +0000 (0:00:00.362)       0:03:14.781 *********
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Friday 29 August 2025 17:22:21 +0000 (0:00:00.914)       0:03:15.695 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
Friday 29 August 2025 17:22:22 +0000 (0:00:00.220)       0:03:15.916 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
Friday 29 August 2025 17:22:22 +0000 (0:00:00.290)       0:03:16.206 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
Friday 29 August 2025 17:22:22 +0000 (0:00:00.237)       0:03:16.443 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
Friday 29 August 2025 17:22:22 +0000 (0:00:00.236)       0:03:16.680 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log installed Cilium CLI version] **********************
Friday 29 August 2025 17:22:23 +0000 (0:00:00.309)       0:03:16.990 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
Friday 29 August 2025 17:22:23 +0000 (0:00:00.250)       0:03:17.240 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
Friday 29 August 2025 17:22:23 +0000 (0:00:00.199)       0:03:17.440 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Set architecture variable] *****************************
Friday 29 August 2025 17:22:23 +0000 (0:00:00.205)       0:03:17.645 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
Friday 29 August 2025 17:22:24 +0000 (0:00:00.567)       0:03:18.212 *********
skipping: [testbed-node-0] => (item=.tar.gz)
skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
skipping: [testbed-node-0]

TASK [k3s_server_post : Verify the downloaded tarball] *************************
Friday 29 August 2025 17:22:24 +0000 (0:00:00.322)       0:03:18.535 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
Friday 29 August 2025 17:22:25 +0000 (0:00:00.265)       0:03:18.801 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
Friday 29 August 2025 17:22:25 +0000 (0:00:00.235)       0:03:19.036 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Friday 29 August 2025 17:22:25 +0000 (0:00:00.233)       0:03:19.269 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Friday 29 August 2025 17:22:25 +0000 (0:00:00.189)       0:03:19.459 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Friday 29 August 2025 17:22:25 +0000 (0:00:00.187)       0:03:19.647 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Check Cilium version] **********************************
Friday 29 August 2025 17:22:26 +0000 (0:00:00.198)       0:03:19.845 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Friday 29 August 2025 17:22:26 +0000 (0:00:00.206)       0:03:20.052 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Friday 29 August 2025 17:22:26 +0000 (0:00:00.219)       0:03:20.272 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Friday 29 August 2025 17:22:26 +0000 (0:00:00.192)       0:03:20.464 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Friday 29 August 2025 17:22:26 +0000 (0:00:00.207)       0:03:20.672 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Friday 29 August 2025 17:22:27 +0000 (0:00:00.210)       0:03:20.883 *********
skipping: [testbed-node-0] => (item=deployment/cilium-operator)
skipping: [testbed-node-0] => (item=daemonset/cilium)
skipping: [testbed-node-0] => (item=deployment/hubble-relay)
skipping: [testbed-node-0] => (item=deployment/hubble-ui)
skipping: [testbed-node-0]

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Friday 29 August 2025 17:22:27 +0000 (0:00:00.838)       0:03:21.721 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Friday 29 August 2025 17:22:28 +0000 (0:00:00.185)       0:03:21.907 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Friday 29 August 2025 17:22:28 +0000 (0:00:00.198)       0:03:22.106 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Friday 29 August 2025 17:22:28 +0000 (0:00:00.211)       0:03:22.318 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Friday 29 August 2025 17:22:28 +0000 (0:00:00.209)       0:03:22.527 *********
skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
skipping: [testbed-node-0]

TASK [k3s_server_post : Deploy metallb pool] ***********************************
Friday 29 August 2025 17:22:29 +0000 (0:00:00.424)       0:03:22.952 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Friday 29 August 2025 17:22:29 +0000 (0:00:00.352)       0:03:23.304 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************

TASK [k9s : Gather variables for each operating system] ************************
Friday 29 August 2025 17:22:30 +0000 (0:00:01.105)       0:03:24.410 *********
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks] ***********************
Friday 29 August 2025 17:22:30 +0000 (0:00:00.134)       0:03:24.544 *********
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Friday 29 August 2025 17:22:31 +0000 (0:00:00.206)       0:03:24.750 *********
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Friday 29 August 2025 17:22:37 +0000 (0:00:06.567)       0:03:31.318 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Manage labels] ***********************************************************
Friday 29 August 2025 17:22:38 +0000 (0:00:00.996)       0:03:32.314 *********
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)

TASK [Manage annotations] ******************************************************
Friday 29 August 2025 17:22:52 +0000 (0:00:13.824)       0:03:46.139 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]
17:22:54.631208 | orchestrator | 2025-08-29 17:22:54.631215 | orchestrator | TASK [Manage taints] *********************************************************** 2025-08-29 17:22:54.631223 | orchestrator | Friday 29 August 2025 17:22:53 +0000 (0:00:01.248) 0:03:47.387 ********* 2025-08-29 17:22:54.631231 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:22:54.631239 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:22:54.631246 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:22:54.631254 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:22:54.631262 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:22:54.631270 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:22:54.631290 | orchestrator | 2025-08-29 17:22:54.631298 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:22:54.631306 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:22:54.631315 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-08-29 17:22:54.631328 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 17:22:54.631341 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 17:22:54.631349 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 17:22:54.631357 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 17:22:54.631368 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 17:22:54.631376 | orchestrator | 2025-08-29 17:22:54.631384 | orchestrator | 2025-08-29 17:22:54.631392 | orchestrator | TASKS RECAP 
******************************************************************** 2025-08-29 17:22:54.631400 | orchestrator | Friday 29 August 2025 17:22:53 +0000 (0:00:00.356) 0:03:47.744 ********* 2025-08-29 17:22:54.631408 | orchestrator | =============================================================================== 2025-08-29 17:22:54.631415 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.85s 2025-08-29 17:22:54.631423 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.23s 2025-08-29 17:22:54.631431 | orchestrator | kubectl : Install required packages ------------------------------------ 14.84s 2025-08-29 17:22:54.631439 | orchestrator | Manage labels ---------------------------------------------------------- 13.82s 2025-08-29 17:22:54.631446 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 13.59s 2025-08-29 17:22:54.631454 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.52s 2025-08-29 17:22:54.631462 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.57s 2025-08-29 17:22:54.631470 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.87s 2025-08-29 17:22:54.631477 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 3.92s 2025-08-29 17:22:54.631485 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.32s 2025-08-29 17:22:54.631493 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.30s 2025-08-29 17:22:54.631501 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.57s 2025-08-29 17:22:54.631509 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.40s 
2025-08-29 17:22:54.631516 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.32s 2025-08-29 17:22:54.631524 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.28s 2025-08-29 17:22:54.631532 | orchestrator | k3s_server : Clean previous runs of k3s-init ---------------------------- 2.24s 2025-08-29 17:22:54.631540 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.14s 2025-08-29 17:22:54.631547 | orchestrator | k3s_server : Create /etc/rancher/k3s directory -------------------------- 2.03s 2025-08-29 17:22:54.631555 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.88s 2025-08-29 17:22:54.631563 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.71s 2025-08-29 17:22:54.631571 | orchestrator | 2025-08-29 17:22:54 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED 2025-08-29 17:22:54.631578 | orchestrator | 2025-08-29 17:22:54 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:22:54.631586 | orchestrator | 2025-08-29 17:22:54 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED 2025-08-29 17:22:54.631598 | orchestrator | 2025-08-29 17:22:54 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED 2025-08-29 17:22:54.631606 | orchestrator | 2025-08-29 17:22:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:22:57.652146 | orchestrator | 2025-08-29 17:22:57 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED 2025-08-29 17:22:57.652567 | orchestrator | 2025-08-29 17:22:57 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:22:57.653377 | orchestrator | 2025-08-29 17:22:57 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED 2025-08-29 17:22:57.653884 | orchestrator | 2025-08-29 
17:22:57 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED 2025-08-29 17:22:57.654698 | orchestrator | 2025-08-29 17:22:57 | INFO  | Task 090cc7ef-5dfc-40cd-baec-68ffd18ae40c is in state STARTED 2025-08-29 17:22:57.655487 | orchestrator | 2025-08-29 17:22:57 | INFO  | Task 00c5a6c4-bd01-4fe0-afe6-fc7357303dfa is in state STARTED 2025-08-29 17:22:57.655506 | orchestrator | 2025-08-29 17:22:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:23:00.703872 | orchestrator | 2025-08-29 17:23:00 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED 2025-08-29 17:23:00.703981 | orchestrator | 2025-08-29 17:23:00 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:23:00.704191 | orchestrator | 2025-08-29 17:23:00 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED 2025-08-29 17:23:00.704865 | orchestrator | 2025-08-29 17:23:00 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED 2025-08-29 17:23:00.705629 | orchestrator | 2025-08-29 17:23:00 | INFO  | Task 090cc7ef-5dfc-40cd-baec-68ffd18ae40c is in state STARTED 2025-08-29 17:23:00.708094 | orchestrator | 2025-08-29 17:23:00 | INFO  | Task 00c5a6c4-bd01-4fe0-afe6-fc7357303dfa is in state STARTED 2025-08-29 17:23:00.708129 | orchestrator | 2025-08-29 17:23:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:23:03.795734 | orchestrator | 2025-08-29 17:23:03 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED 2025-08-29 17:23:03.796907 | orchestrator | 2025-08-29 17:23:03 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:23:03.798599 | orchestrator | 2025-08-29 17:23:03 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED 2025-08-29 17:23:03.799244 | orchestrator | 2025-08-29 17:23:03 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED 2025-08-29 17:23:03.799851 | orchestrator | 2025-08-29 
17:23:03 | INFO  | Task 090cc7ef-5dfc-40cd-baec-68ffd18ae40c is in state STARTED 2025-08-29 17:23:03.800489 | orchestrator | 2025-08-29 17:23:03 | INFO  | Task 00c5a6c4-bd01-4fe0-afe6-fc7357303dfa is in state SUCCESS 2025-08-29 17:23:03.800515 | orchestrator | 2025-08-29 17:23:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:23:06.854659 | orchestrator | 2025-08-29 17:23:06 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED 2025-08-29 17:23:06.854764 | orchestrator | 2025-08-29 17:23:06 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:23:06.854779 | orchestrator | 2025-08-29 17:23:06 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED 2025-08-29 17:23:06.854790 | orchestrator | 2025-08-29 17:23:06 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED 2025-08-29 17:23:06.854802 | orchestrator | 2025-08-29 17:23:06 | INFO  | Task 090cc7ef-5dfc-40cd-baec-68ffd18ae40c is in state STARTED 2025-08-29 17:23:06.854839 | orchestrator | 2025-08-29 17:23:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:23:09.891086 | orchestrator | 2025-08-29 17:23:09 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED 2025-08-29 17:23:09.892684 | orchestrator | 2025-08-29 17:23:09 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:23:09.894527 | orchestrator | 2025-08-29 17:23:09 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED 2025-08-29 17:23:09.896791 | orchestrator | 2025-08-29 17:23:09 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED 2025-08-29 17:23:09.898000 | orchestrator | 2025-08-29 17:23:09 | INFO  | Task 090cc7ef-5dfc-40cd-baec-68ffd18ae40c is in state SUCCESS 2025-08-29 17:23:09.898069 | orchestrator | 2025-08-29 17:23:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:23:12.939041 | orchestrator | 2025-08-29 17:23:12 | INFO  | Task 
6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED 2025-08-29 17:23:12.939680 | orchestrator | 2025-08-29 17:23:12 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:23:12.940939 | orchestrator | 2025-08-29 17:23:12 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED 2025-08-29 17:23:12.945025 | orchestrator | 2025-08-29 17:23:12 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED 2025-08-29 17:23:12.945054 | orchestrator | 2025-08-29 17:23:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:23:15.992060 | orchestrator | 2025-08-29 17:23:15 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED 2025-08-29 17:23:15.992165 | orchestrator | 2025-08-29 17:23:15 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:23:15.992490 | orchestrator | 2025-08-29 17:23:15 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED 2025-08-29 17:23:15.993532 | orchestrator | 2025-08-29 17:23:15 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED 2025-08-29 17:23:15.993558 | orchestrator | 2025-08-29 17:23:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:23:19.026329 | orchestrator | 2025-08-29 17:23:19 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED 2025-08-29 17:23:19.027391 | orchestrator | 2025-08-29 17:23:19 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:23:19.028679 | orchestrator | 2025-08-29 17:23:19 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED 2025-08-29 17:23:19.029601 | orchestrator | 2025-08-29 17:23:19 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED 2025-08-29 17:23:19.029738 | orchestrator | 2025-08-29 17:23:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:23:22.070414 | orchestrator | 2025-08-29 17:23:22 | INFO  | Task 
6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED 2025-08-29 17:23:22.072176 | orchestrator | 2025-08-29 17:23:22 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:23:22.073911 | orchestrator | 2025-08-29 17:23:22 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED 2025-08-29 17:23:22.075803 | orchestrator | 2025-08-29 17:23:22 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state STARTED 2025-08-29 17:23:22.075832 | orchestrator | 2025-08-29 17:23:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:23:25.122374 | orchestrator | 2025-08-29 17:23:25 | INFO  | Task b40d9d2e-685c-4222-8b31-6f79e54047a7 is in state STARTED 2025-08-29 17:23:25.124715 | orchestrator | 2025-08-29 17:23:25 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED 2025-08-29 17:23:25.126118 | orchestrator | 2025-08-29 17:23:25 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:23:25.128160 | orchestrator | 2025-08-29 17:23:25 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED 2025-08-29 17:23:25.130674 | orchestrator | 2025-08-29 17:23:25 | INFO  | Task 499f0fe0-bcb8-47db-bcfe-697ff6ec83e7 is in state SUCCESS 2025-08-29 17:23:25.133274 | orchestrator | 2025-08-29 17:23:25.133349 | orchestrator | 2025-08-29 17:23:25.133361 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-08-29 17:23:25.133373 | orchestrator | 2025-08-29 17:23:25.133384 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-08-29 17:23:25.133395 | orchestrator | Friday 29 August 2025 17:22:58 +0000 (0:00:00.273) 0:00:00.273 ********* 2025-08-29 17:23:25.133407 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-08-29 17:23:25.133418 | orchestrator | 2025-08-29 17:23:25.133429 | orchestrator | TASK [Write kubeconfig file] 
*************************************************** 2025-08-29 17:23:25.133440 | orchestrator | Friday 29 August 2025 17:22:59 +0000 (0:00:01.035) 0:00:01.308 ********* 2025-08-29 17:23:25.133451 | orchestrator | changed: [testbed-manager] 2025-08-29 17:23:25.133462 | orchestrator | 2025-08-29 17:23:25.133472 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-08-29 17:23:25.133483 | orchestrator | Friday 29 August 2025 17:23:00 +0000 (0:00:01.159) 0:00:02.468 ********* 2025-08-29 17:23:25.133494 | orchestrator | changed: [testbed-manager] 2025-08-29 17:23:25.133505 | orchestrator | 2025-08-29 17:23:25.133516 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:23:25.133527 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:23:25.133539 | orchestrator | 2025-08-29 17:23:25.133550 | orchestrator | 2025-08-29 17:23:25.133561 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:23:25.133572 | orchestrator | Friday 29 August 2025 17:23:01 +0000 (0:00:00.486) 0:00:02.954 ********* 2025-08-29 17:23:25.133582 | orchestrator | =============================================================================== 2025-08-29 17:23:25.133593 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.16s 2025-08-29 17:23:25.133604 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.04s 2025-08-29 17:23:25.133615 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.49s 2025-08-29 17:23:25.133625 | orchestrator | 2025-08-29 17:23:25.133636 | orchestrator | 2025-08-29 17:23:25.133647 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-08-29 17:23:25.133658 | orchestrator | 2025-08-29 
17:23:25.133668 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-08-29 17:23:25.133679 | orchestrator | Friday 29 August 2025 17:22:59 +0000 (0:00:00.396) 0:00:00.396 ********* 2025-08-29 17:23:25.133689 | orchestrator | ok: [testbed-manager] 2025-08-29 17:23:25.133701 | orchestrator | 2025-08-29 17:23:25.133711 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-08-29 17:23:25.133722 | orchestrator | Friday 29 August 2025 17:23:00 +0000 (0:00:00.753) 0:00:01.149 ********* 2025-08-29 17:23:25.133732 | orchestrator | ok: [testbed-manager] 2025-08-29 17:23:25.133743 | orchestrator | 2025-08-29 17:23:25.133754 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-08-29 17:23:25.133764 | orchestrator | Friday 29 August 2025 17:23:01 +0000 (0:00:00.609) 0:00:01.758 ********* 2025-08-29 17:23:25.133775 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-08-29 17:23:25.133786 | orchestrator | 2025-08-29 17:23:25.133797 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-08-29 17:23:25.133807 | orchestrator | Friday 29 August 2025 17:23:01 +0000 (0:00:00.707) 0:00:02.465 ********* 2025-08-29 17:23:25.133834 | orchestrator | changed: [testbed-manager] 2025-08-29 17:23:25.133845 | orchestrator | 2025-08-29 17:23:25.133856 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-08-29 17:23:25.133867 | orchestrator | Friday 29 August 2025 17:23:02 +0000 (0:00:01.242) 0:00:03.708 ********* 2025-08-29 17:23:25.133877 | orchestrator | changed: [testbed-manager] 2025-08-29 17:23:25.133888 | orchestrator | 2025-08-29 17:23:25.133898 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-08-29 17:23:25.133909 | orchestrator | Friday 29 August 2025 17:23:03 +0000 
(0:00:00.806) 0:00:04.514 ********* 2025-08-29 17:23:25.133919 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 17:23:25.133930 | orchestrator | 2025-08-29 17:23:25.133941 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-08-29 17:23:25.133951 | orchestrator | Friday 29 August 2025 17:23:05 +0000 (0:00:01.797) 0:00:06.312 ********* 2025-08-29 17:23:25.133969 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 17:23:25.133980 | orchestrator | 2025-08-29 17:23:25.133991 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-08-29 17:23:25.134002 | orchestrator | Friday 29 August 2025 17:23:06 +0000 (0:00:01.027) 0:00:07.340 ********* 2025-08-29 17:23:25.134013 | orchestrator | ok: [testbed-manager] 2025-08-29 17:23:25.134099 | orchestrator | 2025-08-29 17:23:25.134111 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-08-29 17:23:25.134121 | orchestrator | Friday 29 August 2025 17:23:07 +0000 (0:00:00.451) 0:00:07.791 ********* 2025-08-29 17:23:25.134132 | orchestrator | ok: [testbed-manager] 2025-08-29 17:23:25.134143 | orchestrator | 2025-08-29 17:23:25.134154 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:23:25.134165 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:23:25.134175 | orchestrator | 2025-08-29 17:23:25.134186 | orchestrator | 2025-08-29 17:23:25.134197 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:23:25.134208 | orchestrator | Friday 29 August 2025 17:23:07 +0000 (0:00:00.402) 0:00:08.193 ********* 2025-08-29 17:23:25.134218 | orchestrator | =============================================================================== 2025-08-29 17:23:25.134229 | orchestrator | Make 
kubeconfig available for use inside the manager service ------------ 1.80s 2025-08-29 17:23:25.134239 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.24s 2025-08-29 17:23:25.134250 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.03s 2025-08-29 17:23:25.134274 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.81s 2025-08-29 17:23:25.134318 | orchestrator | Get home directory of operator user ------------------------------------- 0.75s 2025-08-29 17:23:25.134329 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.71s 2025-08-29 17:23:25.134340 | orchestrator | Create .kube directory -------------------------------------------------- 0.61s 2025-08-29 17:23:25.134350 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.45s 2025-08-29 17:23:25.134361 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.40s 2025-08-29 17:23:25.134372 | orchestrator | 2025-08-29 17:23:25.134382 | orchestrator | 2025-08-29 17:23:25.134393 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:23:25.134404 | orchestrator | 2025-08-29 17:23:25.134414 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 17:23:25.134425 | orchestrator | Friday 29 August 2025 17:22:13 +0000 (0:00:00.617) 0:00:00.617 ********* 2025-08-29 17:23:25.134436 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:23:25.134446 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:23:25.134457 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:23:25.134467 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:23:25.134478 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:23:25.134496 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:23:25.134507 | orchestrator | 
2025-08-29 17:23:25.134518 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:23:25.134529 | orchestrator | Friday 29 August 2025 17:22:14 +0000 (0:00:01.265) 0:00:01.882 ********* 2025-08-29 17:23:25.134540 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 17:23:25.134551 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 17:23:25.134562 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 17:23:25.134572 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 17:23:25.134583 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 17:23:25.134594 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 17:23:25.134604 | orchestrator | 2025-08-29 17:23:25.134615 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-08-29 17:23:25.134626 | orchestrator | 2025-08-29 17:23:25.134636 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-08-29 17:23:25.134647 | orchestrator | Friday 29 August 2025 17:22:16 +0000 (0:00:01.419) 0:00:03.302 ********* 2025-08-29 17:23:25.134659 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:23:25.134671 | orchestrator | 2025-08-29 17:23:25.134682 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-08-29 17:23:25.134693 | orchestrator | Friday 29 August 2025 17:22:18 +0000 (0:00:01.962) 0:00:05.267 ********* 2025-08-29 17:23:25.134703 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 
2025-08-29 17:23:25.134715 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-08-29 17:23:25.134725 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-08-29 17:23:25.134736 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-08-29 17:23:25.134747 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-08-29 17:23:25.134757 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-08-29 17:23:25.134768 | orchestrator | 2025-08-29 17:23:25.134778 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-08-29 17:23:25.134789 | orchestrator | Friday 29 August 2025 17:22:20 +0000 (0:00:02.072) 0:00:07.339 ********* 2025-08-29 17:23:25.134800 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-08-29 17:23:25.134811 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-08-29 17:23:25.134821 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-08-29 17:23:25.134832 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-08-29 17:23:25.134848 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-08-29 17:23:25.134859 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-08-29 17:23:25.134870 | orchestrator | 2025-08-29 17:23:25.134881 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-08-29 17:23:25.134892 | orchestrator | Friday 29 August 2025 17:22:22 +0000 (0:00:02.679) 0:00:10.018 ********* 2025-08-29 17:23:25.134902 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-08-29 17:23:25.134913 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:23:25.134924 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-08-29 17:23:25.134934 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:23:25.134945 | orchestrator | skipping: [testbed-node-5] => 
(item=openvswitch)  2025-08-29 17:23:25.134955 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:23:25.134966 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-08-29 17:23:25.134976 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:23:25.134987 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-08-29 17:23:25.135004 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:23:25.135015 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-08-29 17:23:25.135025 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:23:25.135036 | orchestrator | 2025-08-29 17:23:25.135047 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-08-29 17:23:25.135057 | orchestrator | Friday 29 August 2025 17:22:24 +0000 (0:00:02.038) 0:00:12.057 ********* 2025-08-29 17:23:25.135068 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:23:25.135079 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:23:25.135090 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:23:25.135106 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:23:25.135117 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:23:25.135127 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:23:25.135138 | orchestrator | 2025-08-29 17:23:25.135149 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-08-29 17:23:25.135159 | orchestrator | Friday 29 August 2025 17:22:25 +0000 (0:00:00.988) 0:00:13.046 ********* 2025-08-29 17:23:25.135174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 17:23:25.135191 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 17:23:25.135202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 17:23:25.135218 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:23:25.135236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:23:25.135255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135267 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135387 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:23:25.135424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135467 | orchestrator |
2025-08-29 17:23:25.135478 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-08-29 17:23:25.135489 | orchestrator | Friday 29 August 2025 17:22:27 +0000 (0:00:02.089) 0:00:15.135 *********
2025-08-29 17:23:25.135501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:23:25.135512 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:23:25.135524 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:23:25.135546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:23:25.135564 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:23:25.135587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:23:25.135599 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135671 | orchestrator |
2025-08-29 17:23:25.135682 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-08-29 17:23:25.135693 | orchestrator | Friday 29 August 2025 17:22:31 +0000 (0:00:03.263) 0:00:18.399 *********
2025-08-29 17:23:25.135704 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:23:25.135715 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:23:25.135726 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:23:25.135737 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:23:25.135747 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:23:25.135758 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:23:25.135768 | orchestrator |
2025-08-29 17:23:25.135779 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-08-29 17:23:25.135790 | orchestrator | Friday 29 August 2025 17:22:32 +0000 (0:00:01.668) 0:00:20.068 *********
2025-08-29 17:23:25.135801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:23:25.135822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:23:25.135841 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:23:25.135859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:23:25.135871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:23:25.135882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:23:25.135893 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135915 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135926 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:23:25.135984 | orchestrator |
2025-08-29 17:23:25.135995 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 17:23:25.136006 | orchestrator | Friday 29 August 2025 17:22:35 +0000 (0:00:02.976) 0:00:23.045 *********
2025-08-29 17:23:25.136017 | orchestrator |
2025-08-29 17:23:25.136028 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 17:23:25.136038 | orchestrator | Friday
29 August 2025 17:22:36 +0000 (0:00:00.513) 0:00:23.559 *********
2025-08-29 17:23:25.136049 | orchestrator |
2025-08-29 17:23:25.136060 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 17:23:25.136071 | orchestrator | Friday 29 August 2025 17:22:36 +0000 (0:00:00.125) 0:00:23.684 *********
2025-08-29 17:23:25.136081 | orchestrator |
2025-08-29 17:23:25.136091 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 17:23:25.136102 | orchestrator | Friday 29 August 2025 17:22:36 +0000 (0:00:00.158) 0:00:23.843 *********
2025-08-29 17:23:25.136113 | orchestrator |
2025-08-29 17:23:25.136124 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 17:23:25.136134 | orchestrator | Friday 29 August 2025 17:22:36 +0000 (0:00:00.136) 0:00:23.980 *********
2025-08-29 17:23:25.136145 | orchestrator |
2025-08-29 17:23:25.136156 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 17:23:25.136167 | orchestrator | Friday 29 August 2025 17:22:36 +0000 (0:00:00.182) 0:00:24.162 *********
2025-08-29 17:23:25.136177 | orchestrator |
2025-08-29 17:23:25.136188 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-08-29 17:23:25.136199 | orchestrator | Friday 29 August 2025 17:22:37 +0000 (0:00:00.217) 0:00:24.380 *********
2025-08-29 17:23:25.136209 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:23:25.136220 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:23:25.136231 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:23:25.136241 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:23:25.136252 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:23:25.136263 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:23:25.136273 | orchestrator |
2025-08-29 17:23:25.136303 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-08-29 17:23:25.136314 | orchestrator | Friday 29 August 2025 17:22:48 +0000 (0:00:11.104) 0:00:35.484 *********
2025-08-29 17:23:25.136325 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:23:25.136335 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:23:25.136346 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:23:25.136357 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:23:25.136367 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:23:25.136378 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:23:25.136388 | orchestrator |
2025-08-29 17:23:25.136399 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-08-29 17:23:25.136410 | orchestrator | Friday 29 August 2025 17:22:50 +0000 (0:00:01.774) 0:00:37.259 *********
2025-08-29 17:23:25.137051 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:23:25.137067 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:23:25.137078 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:23:25.137089 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:23:25.137099 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:23:25.137110 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:23:25.137121 | orchestrator |
2025-08-29 17:23:25.137131 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-08-29 17:23:25.137142 | orchestrator | Friday 29 August 2025 17:22:59 +0000 (0:00:09.312) 0:00:46.572 *********
2025-08-29 17:23:25.137153 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-08-29 17:23:25.137172 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-08-29 17:23:25.137183 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-08-29 17:23:25.137202 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-08-29 17:23:25.137213 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-08-29 17:23:25.137224 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-08-29 17:23:25.137240 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-08-29 17:23:25.137251 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-08-29 17:23:25.137261 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-08-29 17:23:25.137272 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-08-29 17:23:25.137304 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-08-29 17:23:25.137315 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-08-29 17:23:25.137326 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 17:23:25.137336 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 17:23:25.137347 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 17:23:25.137357 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 17:23:25.137368 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 17:23:25.137378 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 17:23:25.137389 | orchestrator |
2025-08-29 17:23:25.137399 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-08-29 17:23:25.137410 | orchestrator | Friday 29 August 2025 17:23:06 +0000 (0:00:07.522) 0:00:54.094 *********
2025-08-29 17:23:25.137421 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-08-29 17:23:25.137432 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:23:25.137442 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-08-29 17:23:25.137453 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:23:25.137463 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-08-29 17:23:25.137474 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:23:25.137484 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-08-29 17:23:25.137495 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-08-29 17:23:25.137506 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-08-29 17:23:25.137516 | orchestrator |
2025-08-29 17:23:25.137527 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-08-29 17:23:25.137537 | orchestrator | Friday 29 August 2025 17:23:10 +0000 (0:00:03.453) 0:00:57.547 *********
2025-08-29 17:23:25.137548 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-08-29 17:23:25.137558 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:23:25.137569 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-08-29 17:23:25.137579 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:23:25.137590 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-08-29 17:23:25.137600 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:23:25.137611 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-08-29 17:23:25.137621 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-08-29 17:23:25.137638 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-08-29 17:23:25.137649 | orchestrator |
2025-08-29 17:23:25.137660 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-08-29 17:23:25.137670 | orchestrator | Friday 29 August 2025 17:23:14 +0000 (0:00:04.054) 0:01:01.602 *********
2025-08-29 17:23:25.137681 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:23:25.137691 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:23:25.137702 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:23:25.137712 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:23:25.137722 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:23:25.137733 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:23:25.137743 | orchestrator |
2025-08-29 17:23:25.137754 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:23:25.137765 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 17:23:25.137777 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 17:23:25.137794 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 17:23:25.137805 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 17:23:25.137816 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 17:23:25.137827 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 17:23:25.137838 | orchestrator |
2025-08-29 17:23:25.137848 | orchestrator |
2025-08-29 17:23:25.137864 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:23:25.137875 | orchestrator | Friday 29 August 2025 17:23:23 +0000 (0:00:08.707) 0:01:10.309 *********
2025-08-29 17:23:25.137885 | orchestrator | ===============================================================================
2025-08-29 17:23:25.137896 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.02s
2025-08-29 17:23:25.137906 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.11s
2025-08-29 17:23:25.137917 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.52s
2025-08-29 17:23:25.137928 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.05s
2025-08-29 17:23:25.137938 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.45s
2025-08-29 17:23:25.137949 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.26s
2025-08-29 17:23:25.137959 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.98s
2025-08-29 17:23:25.137970 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.68s
2025-08-29 17:23:25.137980 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.09s
2025-08-29 17:23:25.137990 | orchestrator | module-load : Load modules ---------------------------------------------- 2.07s
2025-08-29 17:23:25.138001 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.04s
2025-08-29 17:23:25.138012 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.96s
2025-08-29 17:23:25.138058 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.77s
2025-08-29 17:23:25.138070 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.67s
2025-08-29 17:23:25.138081 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.42s
2025-08-29 17:23:25.138091 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.34s
2025-08-29 17:23:25.138108 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.27s
2025-08-29 17:23:25.138119 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.99s
2025-08-29 17:23:25.138130 | orchestrator | 2025-08-29 17:23:25 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:23:28.159815 | orchestrator | 2025-08-29 17:23:28 | INFO  | Task b40d9d2e-685c-4222-8b31-6f79e54047a7 is in state STARTED
2025-08-29 17:23:28.159914 | orchestrator | 2025-08-29 17:23:28 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:23:28.160768 | orchestrator | 2025-08-29 17:23:28 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:23:28.161850 | orchestrator | 2025-08-29 17:23:28 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED
2025-08-29 17:23:28.161872 | orchestrator | 2025-08-29 17:23:28 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:23:31.222669 | orchestrator | 2025-08-29 17:23:31 | INFO  | Task b40d9d2e-685c-4222-8b31-6f79e54047a7 is in state STARTED
2025-08-29 17:23:31.227140 | orchestrator | 2025-08-29 17:23:31 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:23:31.228251 | orchestrator | 2025-08-29 17:23:31 | INFO  | Task
58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:24:53.652730 | orchestrator | 2025-08-29 17:24:53 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state STARTED
2025-08-29 17:24:53.652787 | orchestrator | 2025-08-29 17:24:53 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:24:56.687841 | orchestrator | 2025-08-29 17:24:56 | INFO  | Task b40d9d2e-685c-4222-8b31-6f79e54047a7 is in state STARTED
2025-08-29 17:24:56.688740 | orchestrator | 2025-08-29 17:24:56 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:24:56.689788 | orchestrator | 2025-08-29 17:24:56 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:24:56.691435 | orchestrator | 2025-08-29 17:24:56 | INFO  | Task 4d8e8829-26fd-4132-a39d-6ef7d5e140a8 is in state SUCCESS
2025-08-29 17:24:56.692873 | orchestrator |
2025-08-29 17:24:56.692897 | orchestrator |
2025-08-29 17:24:56.692907 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-08-29 17:24:56.692916 | orchestrator |
2025-08-29 17:24:56.692926 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-08-29 17:24:56.692935 | orchestrator | Friday 29 August 2025 17:22:37 +0000 (0:00:00.190) 0:00:00.190 *********
2025-08-29 17:24:56.692944 | orchestrator | ok: [localhost] => {
2025-08-29 17:24:56.692954 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-08-29 17:24:56.692963 | orchestrator | }
2025-08-29 17:24:56.692972 | orchestrator |
2025-08-29 17:24:56.692981 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-08-29 17:24:56.692990 | orchestrator | Friday 29 August 2025 17:22:37 +0000 (0:00:00.064) 0:00:00.254 *********
2025-08-29 17:24:56.692999 | orchestrator | fatal: [localhost]: FAILED!
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-08-29 17:24:56.693009 | orchestrator | ...ignoring
2025-08-29 17:24:56.693018 | orchestrator |
2025-08-29 17:24:56.693027 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-08-29 17:24:56.693036 | orchestrator | Friday 29 August 2025 17:22:41 +0000 (0:00:04.284) 0:00:04.539 *********
2025-08-29 17:24:56.693044 | orchestrator | skipping: [localhost]
2025-08-29 17:24:56.693053 | orchestrator |
2025-08-29 17:24:56.693062 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-08-29 17:24:56.693070 | orchestrator | Friday 29 August 2025 17:22:41 +0000 (0:00:00.060) 0:00:04.599 *********
2025-08-29 17:24:56.693079 | orchestrator | ok: [localhost]
2025-08-29 17:24:56.693088 | orchestrator |
2025-08-29 17:24:56.693096 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 17:24:56.693106 | orchestrator |
2025-08-29 17:24:56.693118 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 17:24:56.693129 | orchestrator | Friday 29 August 2025 17:22:41 +0000 (0:00:00.279) 0:00:04.879 *********
2025-08-29 17:24:56.693139 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:24:56.693150 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:24:56.693161 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:24:56.693172 | orchestrator |
2025-08-29 17:24:56.693198 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 17:24:56.693209 | orchestrator | Friday 29 August 2025 17:22:42 +0000 (0:00:00.558) 0:00:05.437 *********
2025-08-29 17:24:56.693220 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-08-29 17:24:56.693255 | orchestrator | ok: [testbed-node-1] =>
(item=enable_rabbitmq_True)
2025-08-29 17:24:56.693266 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-08-29 17:24:56.693277 | orchestrator |
2025-08-29 17:24:56.693287 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-08-29 17:24:56.693331 | orchestrator |
2025-08-29 17:24:56.693342 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-08-29 17:24:56.693353 | orchestrator | Friday 29 August 2025 17:22:43 +0000 (0:00:00.719) 0:00:06.157 *********
2025-08-29 17:24:56.693364 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:24:56.693375 | orchestrator |
2025-08-29 17:24:56.693386 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-08-29 17:24:56.693397 | orchestrator | Friday 29 August 2025 17:22:43 +0000 (0:00:00.671) 0:00:06.828 *********
2025-08-29 17:24:56.693407 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:24:56.693418 | orchestrator |
2025-08-29 17:24:56.693429 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-08-29 17:24:56.693440 | orchestrator | Friday 29 August 2025 17:22:44 +0000 (0:00:00.971) 0:00:07.800 *********
2025-08-29 17:24:56.693451 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:24:56.693462 | orchestrator |
2025-08-29 17:24:56.693473 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-08-29 17:24:56.693485 | orchestrator | Friday 29 August 2025 17:22:45 +0000 (0:00:00.404) 0:00:08.234 *********
2025-08-29 17:24:56.693497 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:24:56.693509 | orchestrator |
2025-08-29 17:24:56.693522 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-08-29 17:24:56.693534 |
orchestrator | Friday 29 August 2025 17:22:45 +0000 (0:00:00.390) 0:00:08.638 *********
2025-08-29 17:24:56.693546 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:24:56.693558 | orchestrator |
2025-08-29 17:24:56.693569 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-08-29 17:24:56.693582 | orchestrator | Friday 29 August 2025 17:22:45 +0000 (0:00:00.578) 0:00:09.029 *********
2025-08-29 17:24:56.693594 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:24:56.693606 | orchestrator |
2025-08-29 17:24:56.693618 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-08-29 17:24:56.693630 | orchestrator | Friday 29 August 2025 17:22:46 +0000 (0:00:00.578) 0:00:09.608 *********
2025-08-29 17:24:56.693642 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:24:56.693654 | orchestrator |
2025-08-29 17:24:56.693666 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-08-29 17:24:56.693678 | orchestrator | Friday 29 August 2025 17:22:47 +0000 (0:00:01.148) 0:00:10.757 *********
2025-08-29 17:24:56.693690 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:24:56.693702 | orchestrator |
2025-08-29 17:24:56.693714 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-08-29 17:24:56.693726 | orchestrator | Friday 29 August 2025 17:22:48 +0000 (0:00:01.106) 0:00:11.863 *********
2025-08-29 17:24:56.693738 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:24:56.693750 | orchestrator |
2025-08-29 17:24:56.693762 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-08-29 17:24:56.693775 | orchestrator | Friday 29 August 2025 17:22:49 +0000 (0:00:00.923) 0:00:12.787 *********
2025-08-29 17:24:56.693788 | orchestrator |
skipping: [testbed-node-0] 2025-08-29 17:24:56.693799 | orchestrator | 2025-08-29 17:24:56.693820 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-08-29 17:24:56.693831 | orchestrator | Friday 29 August 2025 17:22:51 +0000 (0:00:01.761) 0:00:14.548 ********* 2025-08-29 17:24:56.693847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 17:24:56.693877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 17:24:56.693892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 17:24:56.693904 | orchestrator | 2025-08-29 17:24:56.693915 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-08-29 17:24:56.693926 | orchestrator | Friday 29 August 2025 17:22:53 +0000 (0:00:01.598) 0:00:16.146 ********* 2025-08-29 17:24:56.693948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 17:24:56.693967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 17:24:56.693984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 17:24:56.693996 | orchestrator | 2025-08-29 17:24:56.694007 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-08-29 17:24:56.694064 | orchestrator | Friday 29 August 2025 17:22:54 +0000 (0:00:01.693) 0:00:17.840 ********* 2025-08-29 17:24:56.694085 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-08-29 17:24:56.694105 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-08-29 17:24:56.694122 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-08-29 17:24:56.694133 | orchestrator | 2025-08-29 17:24:56.694144 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2025-08-29 17:24:56.694155 | orchestrator | Friday 29 August 2025 17:22:57 +0000 (0:00:02.640) 0:00:20.480 ********* 2025-08-29 17:24:56.694166 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-08-29 17:24:56.694177 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-08-29 17:24:56.694188 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-08-29 17:24:56.694198 | orchestrator | 2025-08-29 17:24:56.694209 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-08-29 17:24:56.694220 | orchestrator | Friday 29 August 2025 17:23:00 +0000 (0:00:03.258) 0:00:23.739 ********* 2025-08-29 17:24:56.694231 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-08-29 17:24:56.694250 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-08-29 17:24:56.694261 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-08-29 17:24:56.694272 | orchestrator | 2025-08-29 17:24:56.694282 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-08-29 17:24:56.694311 | orchestrator | Friday 29 August 2025 17:23:02 +0000 (0:00:01.466) 0:00:25.205 ********* 2025-08-29 17:24:56.694331 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-08-29 17:24:56.694342 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-08-29 17:24:56.694353 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-08-29 17:24:56.694364 | orchestrator | 2025-08-29 17:24:56.694375 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2025-08-29 17:24:56.694386 | orchestrator | Friday 29 August 2025 17:23:04 +0000 (0:00:02.349) 0:00:27.555 ********* 2025-08-29 17:24:56.694397 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-08-29 17:24:56.694408 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-08-29 17:24:56.694419 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-08-29 17:24:56.694429 | orchestrator | 2025-08-29 17:24:56.694440 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-08-29 17:24:56.694451 | orchestrator | Friday 29 August 2025 17:23:06 +0000 (0:00:01.896) 0:00:29.452 ********* 2025-08-29 17:24:56.694462 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-08-29 17:24:56.694473 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-08-29 17:24:56.694483 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-08-29 17:24:56.694494 | orchestrator | 2025-08-29 17:24:56.694505 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-08-29 17:24:56.694516 | orchestrator | Friday 29 August 2025 17:23:08 +0000 (0:00:02.241) 0:00:31.693 ********* 2025-08-29 17:24:56.694526 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:24:56.694537 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:24:56.694548 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:24:56.694559 | orchestrator | 2025-08-29 17:24:56.694575 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-08-29 17:24:56.694587 | orchestrator | Friday 29 August 2025 17:23:09 
+0000 (0:00:00.496) 0:00:32.190 ********* 2025-08-29 17:24:56.694599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 17:24:56.694612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 17:24:56.694639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 17:24:56.694651 | orchestrator | 2025-08-29 17:24:56.694662 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-08-29 17:24:56.694673 | orchestrator | Friday 29 August 2025 17:23:10 +0000 (0:00:01.415) 0:00:33.605 ********* 2025-08-29 17:24:56.694684 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:24:56.694695 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:24:56.694705 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:24:56.694716 | orchestrator | 2025-08-29 17:24:56.694727 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-08-29 17:24:56.694738 | 
orchestrator | Friday 29 August 2025 17:23:11 +0000 (0:00:00.859) 0:00:34.465 ********* 2025-08-29 17:24:56.694748 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:24:56.694759 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:24:56.694770 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:24:56.694781 | orchestrator | 2025-08-29 17:24:56.694792 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-08-29 17:24:56.694802 | orchestrator | Friday 29 August 2025 17:23:18 +0000 (0:00:07.244) 0:00:41.709 ********* 2025-08-29 17:24:56.694813 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:24:56.694824 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:24:56.694839 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:24:56.694850 | orchestrator | 2025-08-29 17:24:56.694861 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-08-29 17:24:56.694872 | orchestrator | 2025-08-29 17:24:56.694883 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-08-29 17:24:56.694894 | orchestrator | Friday 29 August 2025 17:23:19 +0000 (0:00:00.566) 0:00:42.275 ********* 2025-08-29 17:24:56.694904 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:24:56.694915 | orchestrator | 2025-08-29 17:24:56.694926 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-08-29 17:24:56.694938 | orchestrator | Friday 29 August 2025 17:23:19 +0000 (0:00:00.563) 0:00:42.838 ********* 2025-08-29 17:24:56.694956 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:24:56.694967 | orchestrator | 2025-08-29 17:24:56.694978 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-08-29 17:24:56.694989 | orchestrator | Friday 29 August 2025 17:23:20 +0000 (0:00:00.245) 0:00:43.083 ********* 2025-08-29 17:24:56.695000 | orchestrator 
| changed: [testbed-node-0] 2025-08-29 17:24:56.695011 | orchestrator | 2025-08-29 17:24:56.695021 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-08-29 17:24:56.695032 | orchestrator | Friday 29 August 2025 17:23:21 +0000 (0:00:01.623) 0:00:44.707 ********* 2025-08-29 17:24:56.695043 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:24:56.695054 | orchestrator | 2025-08-29 17:24:56.695065 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-08-29 17:24:56.695075 | orchestrator | 2025-08-29 17:24:56.695086 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-08-29 17:24:56.695097 | orchestrator | Friday 29 August 2025 17:24:14 +0000 (0:00:53.291) 0:01:37.999 ********* 2025-08-29 17:24:56.695108 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:24:56.695119 | orchestrator | 2025-08-29 17:24:56.695130 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-08-29 17:24:56.695141 | orchestrator | Friday 29 August 2025 17:24:15 +0000 (0:00:00.697) 0:01:38.696 ********* 2025-08-29 17:24:56.695151 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:24:56.695162 | orchestrator | 2025-08-29 17:24:56.695173 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-08-29 17:24:56.695184 | orchestrator | Friday 29 August 2025 17:24:15 +0000 (0:00:00.234) 0:01:38.931 ********* 2025-08-29 17:24:56.695195 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:24:56.695205 | orchestrator | 2025-08-29 17:24:56.695216 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-08-29 17:24:56.695227 | orchestrator | Friday 29 August 2025 17:24:22 +0000 (0:00:06.554) 0:01:45.485 ********* 2025-08-29 17:24:56.695238 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:24:56.695249 
| orchestrator | 2025-08-29 17:24:56.695260 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-08-29 17:24:56.695270 | orchestrator | 2025-08-29 17:24:56.695281 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-08-29 17:24:56.695322 | orchestrator | Friday 29 August 2025 17:24:33 +0000 (0:00:10.670) 0:01:56.156 ********* 2025-08-29 17:24:56.695334 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:24:56.695344 | orchestrator | 2025-08-29 17:24:56.695355 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-08-29 17:24:56.695366 | orchestrator | Friday 29 August 2025 17:24:33 +0000 (0:00:00.570) 0:01:56.727 ********* 2025-08-29 17:24:56.695377 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:24:56.695387 | orchestrator | 2025-08-29 17:24:56.695398 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-08-29 17:24:56.695409 | orchestrator | Friday 29 August 2025 17:24:33 +0000 (0:00:00.232) 0:01:56.960 ********* 2025-08-29 17:24:56.695420 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:24:56.695431 | orchestrator | 2025-08-29 17:24:56.695441 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-08-29 17:24:56.695459 | orchestrator | Friday 29 August 2025 17:24:41 +0000 (0:00:07.277) 0:02:04.238 ********* 2025-08-29 17:24:56.695470 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:24:56.695480 | orchestrator | 2025-08-29 17:24:56.695491 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-08-29 17:24:56.695502 | orchestrator | 2025-08-29 17:24:56.695513 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-08-29 17:24:56.695523 | orchestrator | Friday 29 August 2025 17:24:50 +0000 (0:00:08.925) 
0:02:13.164 ********* 2025-08-29 17:24:56.695534 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:24:56.695545 | orchestrator | 2025-08-29 17:24:56.695556 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-08-29 17:24:56.695573 | orchestrator | Friday 29 August 2025 17:24:50 +0000 (0:00:00.888) 0:02:14.053 ********* 2025-08-29 17:24:56.695584 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-08-29 17:24:56.695595 | orchestrator | enable_outward_rabbitmq_True 2025-08-29 17:24:56.695606 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-08-29 17:24:56.695617 | orchestrator | outward_rabbitmq_restart 2025-08-29 17:24:56.695627 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:24:56.695638 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:24:56.695649 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:24:56.695660 | orchestrator | 2025-08-29 17:24:56.695671 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-08-29 17:24:56.695682 | orchestrator | skipping: no hosts matched 2025-08-29 17:24:56.695692 | orchestrator | 2025-08-29 17:24:56.695703 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-08-29 17:24:56.695714 | orchestrator | skipping: no hosts matched 2025-08-29 17:24:56.695725 | orchestrator | 2025-08-29 17:24:56.695735 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-08-29 17:24:56.695746 | orchestrator | skipping: no hosts matched 2025-08-29 17:24:56.695757 | orchestrator | 2025-08-29 17:24:56.695768 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:24:56.695779 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-08-29 
17:24:56.695802 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-08-29 17:24:56.695814 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 17:24:56.695825 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 17:24:56.695836 | orchestrator | 2025-08-29 17:24:56.695847 | orchestrator | 2025-08-29 17:24:56.695857 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:24:56.695868 | orchestrator | Friday 29 August 2025 17:24:53 +0000 (0:00:02.495) 0:02:16.548 ********* 2025-08-29 17:24:56.695879 | orchestrator | =============================================================================== 2025-08-29 17:24:56.695890 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 72.89s 2025-08-29 17:24:56.695901 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 15.46s 2025-08-29 17:24:56.695912 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.24s 2025-08-29 17:24:56.695923 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.28s 2025-08-29 17:24:56.695934 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.26s 2025-08-29 17:24:56.695945 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.64s 2025-08-29 17:24:56.695955 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.49s 2025-08-29 17:24:56.695966 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.35s 2025-08-29 17:24:56.695977 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.24s 2025-08-29 17:24:56.695987 | 
orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.90s 2025-08-29 17:24:56.695998 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.83s 2025-08-29 17:24:56.696009 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 1.76s 2025-08-29 17:24:56.696020 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.69s 2025-08-29 17:24:56.696031 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.60s 2025-08-29 17:24:56.696047 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.47s 2025-08-29 17:24:56.696058 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.42s 2025-08-29 17:24:56.696069 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.15s 2025-08-29 17:24:56.696080 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.11s 2025-08-29 17:24:56.696090 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.97s 2025-08-29 17:24:56.696101 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 0.92s 2025-08-29 17:24:56.696112 | orchestrator | 2025-08-29 17:24:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:24:59.737600 | orchestrator | 2025-08-29 17:24:59 | INFO  | Task b40d9d2e-685c-4222-8b31-6f79e54047a7 is in state STARTED 2025-08-29 17:24:59.739087 | orchestrator | 2025-08-29 17:24:59 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED 2025-08-29 17:24:59.742209 | orchestrator | 2025-08-29 17:24:59 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:24:59.742640 | orchestrator | 2025-08-29 17:24:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:25:02.789032 | 
2025-08-29 17:25:48 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:25:48.461011 | orchestrator | 2025-08-29 17:25:48 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:25:48.461056 | orchestrator | 2025-08-29 17:25:48 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:25:51.499422 | orchestrator | 2025-08-29 17:25:51 | INFO  | Task b40d9d2e-685c-4222-8b31-6f79e54047a7 is in state STARTED
2025-08-29 17:25:51.500162 | orchestrator | 2025-08-29 17:25:51 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:25:51.501275 | orchestrator | 2025-08-29 17:25:51 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:25:51.501424 | orchestrator | 2025-08-29 17:25:51 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:25:54.547003 | orchestrator | 2025-08-29 17:25:54 | INFO  | Task b40d9d2e-685c-4222-8b31-6f79e54047a7 is in state STARTED
2025-08-29 17:25:54.547187 | orchestrator | 2025-08-29 17:25:54 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED
2025-08-29 17:25:54.550961 | orchestrator | 2025-08-29 17:25:54 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED
2025-08-29 17:25:54.550989 | orchestrator | 2025-08-29 17:25:54 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:25:57.591812 | orchestrator | 2025-08-29 17:25:57 | INFO  | Task b40d9d2e-685c-4222-8b31-6f79e54047a7 is in state SUCCESS
2025-08-29 17:25:57.592812 | orchestrator |
2025-08-29 17:25:57.592850 | orchestrator |
2025-08-29 17:25:57.592864 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 17:25:57.592876 | orchestrator |
2025-08-29 17:25:57.592887 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 17:25:57.592899 | orchestrator | Friday 29 August 2025 17:23:27 +0000 (0:00:00.197) 0:00:00.197 *********
2025-08-29 17:25:57.592910 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:25:57.592945 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:25:57.592956 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:25:57.592967 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:25:57.592978 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:25:57.592988 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:25:57.592999 | orchestrator |
2025-08-29 17:25:57.593010 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 17:25:57.593021 | orchestrator | Friday 29 August 2025 17:23:29 +0000 (0:00:01.140) 0:00:01.338 *********
2025-08-29 17:25:57.593032 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-08-29 17:25:57.593043 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-08-29 17:25:57.593054 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-08-29 17:25:57.593064 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-08-29 17:25:57.593075 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-08-29 17:25:57.593086 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-08-29 17:25:57.593097 | orchestrator |
2025-08-29 17:25:57.593108 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-08-29 17:25:57.593264 | orchestrator |
2025-08-29 17:25:57.593292 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-08-29 17:25:57.593328 | orchestrator | Friday 29 August 2025 17:23:30 +0000 (0:00:01.553) 0:00:02.891 *********
2025-08-29 17:25:57.593341 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:25:57.593353 | orchestrator |
2025-08-29 17:25:57.593364 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-08-29 17:25:57.593375 | orchestrator | Friday 29 August 2025 17:23:31 +0000 (0:00:01.290) 0:00:04.182 *********
2025-08-29 17:25:57.593388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593402 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593416 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593477 | orchestrator |
2025-08-29 17:25:57.593502 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-08-29 17:25:57.593516 | orchestrator | Friday 29 August 2025 17:23:33 +0000 (0:00:01.354) 0:00:05.536 *********
2025-08-29 17:25:57.593529 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593573 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593610 | orchestrator |
2025-08-29 17:25:57.593622 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-08-29 17:25:57.593635 | orchestrator | Friday 29 August 2025 17:23:34 +0000 (0:00:01.469) 0:00:07.006 *********
2025-08-29 17:25:57.593647 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593666 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593743 | orchestrator |
2025-08-29 17:25:57.593756 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-08-29 17:25:57.593768 | orchestrator | Friday 29 August 2025 17:23:35 +0000 (0:00:01.100) 0:00:08.106 *********
2025-08-29 17:25:57.593779 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593790 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593851 | orchestrator |
2025-08-29 17:25:57.593867 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-08-29 17:25:57.593879 | orchestrator | Friday 29 August 2025 17:23:37 +0000 (0:00:01.535) 0:00:09.642 *********
2025-08-29 17:25:57.593890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593901 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593916 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.593968 | orchestrator |
2025-08-29 17:25:57.593979 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-08-29 17:25:57.593990 | orchestrator | Friday 29 August 2025 17:23:38 +0000 (0:00:03.026) 0:00:10.858 *********
2025-08-29 17:25:57.594001 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:25:57.594012 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:25:57.594089 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:25:57.594137 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:25:57.594149 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:25:57.594160 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:25:57.594171 | orchestrator |
2025-08-29 17:25:57.594182 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-08-29 17:25:57.594193 | orchestrator | Friday 29 August 2025 17:23:41 +0000 (0:00:03.026) 0:00:13.885 *********
2025-08-29 17:25:57.594204 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-08-29 17:25:57.594215 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-08-29 17:25:57.594226 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-08-29 17:25:57.594237 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-08-29 17:25:57.594247 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-08-29 17:25:57.594258 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-08-29 17:25:57.594269 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 17:25:57.594280 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 17:25:57.594297 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 17:25:57.594328 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 17:25:57.594340 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 17:25:57.594350 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 17:25:57.594361 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 17:25:57.594374 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 17:25:57.594385 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 17:25:57.594396 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 17:25:57.594407 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 17:25:57.594431 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 17:25:57.594442 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 17:25:57.594454 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 17:25:57.594465 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 17:25:57.594484 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 17:25:57.594495 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 17:25:57.594505 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 17:25:57.594516 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 17:25:57.594527 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 17:25:57.594538 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 17:25:57.594549 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 17:25:57.594560 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 17:25:57.594570 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 17:25:57.594581 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 17:25:57.594592 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 17:25:57.594603 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 17:25:57.594614 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 17:25:57.594625 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 17:25:57.594636 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-08-29 17:25:57.594647 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 17:25:57.594658 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-08-29 17:25:57.594668 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-08-29 17:25:57.594679 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-08-29 17:25:57.594690 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-08-29 17:25:57.594701 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-08-29 17:25:57.594713 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-08-29 17:25:57.594724 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-08-29 17:25:57.594739 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-08-29 17:25:57.594751 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-08-29 17:25:57.594762 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-08-29 17:25:57.594773 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-08-29 17:25:57.594784 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-08-29 17:25:57.594795 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-08-29 17:25:57.594812 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-08-29 17:25:57.594823 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-08-29 17:25:57.594833 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-08-29 17:25:57.594849 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-08-29 17:25:57.594860 | orchestrator |
2025-08-29 17:25:57.594871 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 17:25:57.594882 | orchestrator | Friday 29 August 2025 17:23:59 +0000 (0:00:18.083) 0:00:31.969 *********
2025-08-29 17:25:57.594893 | orchestrator |
2025-08-29 17:25:57.594904 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 17:25:57.594915 | orchestrator | Friday 29 August 2025 17:24:00 +0000 (0:00:00.376) 0:00:32.345 *********
2025-08-29 17:25:57.594926 | orchestrator |
2025-08-29 17:25:57.594937 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 17:25:57.594948 | orchestrator | Friday 29 August 2025 17:24:00 +0000 (0:00:00.080) 0:00:32.426 *********
2025-08-29 17:25:57.594958 | orchestrator |
2025-08-29 17:25:57.594969 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 17:25:57.594980 | orchestrator | Friday 29 August 2025 17:24:00 +0000 (0:00:00.081) 0:00:32.508 *********
2025-08-29 17:25:57.594991 | orchestrator |
2025-08-29 17:25:57.595002 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 17:25:57.595012 | orchestrator | Friday 29 August 2025 17:24:00 +0000 (0:00:00.073) 0:00:32.581 *********
2025-08-29 17:25:57.595023 | orchestrator |
2025-08-29 17:25:57.595034 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 17:25:57.595045 | orchestrator | Friday 29 August 2025 17:24:00 +0000 (0:00:00.139) 0:00:32.720 *********
2025-08-29 17:25:57.595055 | orchestrator |
2025-08-29 17:25:57.595066 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-08-29 17:25:57.595077 | orchestrator | Friday 29 August 2025 17:24:00 +0000 (0:00:00.096) 0:00:32.817 *********
2025-08-29 17:25:57.595088 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:25:57.595099 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:25:57.595110 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:25:57.595120 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:25:57.595131 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:25:57.595142 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:25:57.595153 | orchestrator |
2025-08-29 17:25:57.595163 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-08-29 17:25:57.595174 | orchestrator | Friday 29 August 2025 17:24:02 +0000 (0:00:02.203) 0:00:35.020 *********
2025-08-29 17:25:57.595210 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:25:57.595221 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:25:57.595232 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:25:57.595242 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:25:57.595253 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:25:57.595264 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:25:57.595274 | orchestrator |
2025-08-29 17:25:57.595285 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-08-29 17:25:57.595296 | orchestrator |
2025-08-29 17:25:57.595323 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-08-29 17:25:57.595335 | orchestrator | Friday 29 August 2025 17:24:41 +0000 (0:00:38.895) 0:01:13.916 *********
2025-08-29 17:25:57.595345 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:25:57.595356 | orchestrator |
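The "Configure OVN in OVSDB" task above records each setting it writes into the local Open vSwitch database as an `external_ids` entry. As an illustration only (not OSISM's actual implementation), the settings logged for testbed-node-0 map onto standard `ovs-vsctl` invocations of the following form; the `vsctl_command` helper and the `settings` list are constructed here for demonstration, with values copied from the log:

```python
# Illustration: build the ovs-vsctl calls corresponding to the external_ids
# the "Configure OVN in OVSDB" task applied on testbed-node-0.
# The command form follows standard Open vSwitch tooling; OSISM's real
# implementation (a Kolla Ansible module) is not shown in this log.
settings = [
    ("ovn-encap-ip", "192.168.16.10"),
    ("ovn-encap-type", "geneve"),
    ("ovn-remote", "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"),
    ("ovn-remote-probe-interval", "60000"),
    ("ovn-openflow-probe-interval", "60"),
    ("ovn-bridge-mappings", "physnet1:br-ex"),
    ("ovn-cms-options", "enable-chassis-as-gw,availability-zones=nova"),
]

def vsctl_command(name, value):
    # Each setting becomes one external_ids entry on the local Open_vSwitch row.
    return f'ovs-vsctl set open_vswitch . external_ids:{name}="{value}"'

commands = [vsctl_command(name, value) for name, value in settings]
```

On gateway hosts the log additionally shows `ovn-cms-options` set to `enable-chassis-as-gw`, while on pure compute nodes that key is removed (`'state': 'absent'`).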
2025-08-29 17:25:57.595367 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-08-29 17:25:57.595386 | orchestrator | Friday 29 August 2025 17:24:42 +0000 (0:00:00.811) 0:01:14.728 ********* 2025-08-29 17:25:57.595397 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:25:57.595408 | orchestrator | 2025-08-29 17:25:57.595418 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-08-29 17:25:57.595429 | orchestrator | Friday 29 August 2025 17:24:43 +0000 (0:00:00.625) 0:01:15.353 ********* 2025-08-29 17:25:57.595440 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:25:57.595451 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:25:57.595461 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:25:57.595472 | orchestrator | 2025-08-29 17:25:57.595483 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-08-29 17:25:57.595493 | orchestrator | Friday 29 August 2025 17:24:44 +0000 (0:00:01.095) 0:01:16.448 ********* 2025-08-29 17:25:57.595504 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:25:57.595515 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:25:57.595525 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:25:57.595542 | orchestrator | 2025-08-29 17:25:57.595553 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-08-29 17:25:57.595564 | orchestrator | Friday 29 August 2025 17:24:44 +0000 (0:00:00.386) 0:01:16.835 ********* 2025-08-29 17:25:57.595575 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:25:57.595585 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:25:57.595596 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:25:57.595607 | orchestrator | 2025-08-29 17:25:57.595617 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] 
******* 2025-08-29 17:25:57.595628 | orchestrator | Friday 29 August 2025 17:24:44 +0000 (0:00:00.340) 0:01:17.176 *********
2025-08-29 17:25:57.595639 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:25:57.595649 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:25:57.595660 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:25:57.595670 | orchestrator |
2025-08-29 17:25:57.595681 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-08-29 17:25:57.595692 | orchestrator | Friday 29 August 2025 17:24:45 +0000 (0:00:00.348) 0:01:17.525 *********
2025-08-29 17:25:57.595702 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:25:57.595713 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:25:57.595724 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:25:57.595734 | orchestrator |
2025-08-29 17:25:57.595745 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-08-29 17:25:57.595755 | orchestrator | Friday 29 August 2025 17:24:45 +0000 (0:00:00.648) 0:01:18.174 *********
2025-08-29 17:25:57.595766 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.595776 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.595787 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.595797 | orchestrator |
2025-08-29 17:25:57.595813 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-08-29 17:25:57.595824 | orchestrator | Friday 29 August 2025 17:24:46 +0000 (0:00:00.304) 0:01:18.478 *********
2025-08-29 17:25:57.595835 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.595846 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.595856 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.595866 | orchestrator |
2025-08-29 17:25:57.595877 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-08-29 17:25:57.595888 | orchestrator | Friday 29 August 2025 17:24:46 +0000 (0:00:00.307) 0:01:18.786 *********
2025-08-29 17:25:57.595898 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.595909 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.595920 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.595930 | orchestrator |
2025-08-29 17:25:57.595941 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-08-29 17:25:57.595952 | orchestrator | Friday 29 August 2025 17:24:46 +0000 (0:00:00.315) 0:01:19.101 *********
2025-08-29 17:25:57.595962 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.595979 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.595989 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.596000 | orchestrator |
2025-08-29 17:25:57.596011 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-08-29 17:25:57.596021 | orchestrator | Friday 29 August 2025 17:24:47 +0000 (0:00:00.544) 0:01:19.645 *********
2025-08-29 17:25:57.596032 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.596043 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.596053 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.596064 | orchestrator |
2025-08-29 17:25:57.596075 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-08-29 17:25:57.596086 | orchestrator | Friday 29 August 2025 17:24:47 +0000 (0:00:00.337) 0:01:19.982 *********
2025-08-29 17:25:57.596096 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.596107 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.596117 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.596128 | orchestrator |
2025-08-29 17:25:57.596139 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-08-29 17:25:57.596150 | orchestrator | Friday 29 August 2025 17:24:48 +0000 (0:00:00.299) 0:01:20.282 *********
2025-08-29 17:25:57.596160 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.596171 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.596182 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.596192 | orchestrator |
2025-08-29 17:25:57.596203 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-08-29 17:25:57.596213 | orchestrator | Friday 29 August 2025 17:24:48 +0000 (0:00:00.326) 0:01:20.608 *********
2025-08-29 17:25:57.596224 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.596235 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.596245 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.596256 | orchestrator |
2025-08-29 17:25:57.596266 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-08-29 17:25:57.596277 | orchestrator | Friday 29 August 2025 17:24:48 +0000 (0:00:00.316) 0:01:20.925 *********
2025-08-29 17:25:57.596288 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.596298 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.596323 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.596334 | orchestrator |
2025-08-29 17:25:57.596345 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-08-29 17:25:57.596356 | orchestrator | Friday 29 August 2025 17:24:49 +0000 (0:00:00.549) 0:01:21.475 *********
2025-08-29 17:25:57.596366 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.596377 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.596387 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.596398 | orchestrator |
2025-08-29 17:25:57.596409 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-08-29 17:25:57.596419 |
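The "Check OVN NB/SB service port liveness" tasks above probe whether a clustered OVSDB server already answers on its port, which feeds the later decision between bootstrapping a new cluster and joining an existing one. A minimal sketch of such a probe in Python (host names and the conventional OVN ports 6641/6642 here are illustrative, not taken from the role's source):

```python
import socket

def port_is_live(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative use: split hosts by liveness, as the "Divide hosts by their
# OVN NB service port liveness" task does with its own results.
hosts = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]
live = [h for h in hosts if port_is_live(h, 6641)]   # 6641: conventional NB port
dead = [h for h in hosts if h not in live]
```

The actual role performs this check via Ansible rather than raw sockets; the sketch only shows the shape of the test.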
orchestrator | Friday 29 August 2025 17:24:49 +0000 (0:00:00.361) 0:01:21.836 *********
2025-08-29 17:25:57.596430 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.596441 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.596451 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.596462 | orchestrator |
2025-08-29 17:25:57.596472 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-08-29 17:25:57.596483 | orchestrator | Friday 29 August 2025 17:24:50 +0000 (0:00:00.385) 0:01:22.221 *********
2025-08-29 17:25:57.596494 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.596505 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.596521 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.596532 | orchestrator |
2025-08-29 17:25:57.596543 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-08-29 17:25:57.596553 | orchestrator | Friday 29 August 2025 17:24:50 +0000 (0:00:00.553) 0:01:22.775 *********
2025-08-29 17:25:57.596564 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:25:57.596581 | orchestrator |
2025-08-29 17:25:57.596592 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-08-29 17:25:57.596603 | orchestrator | Friday 29 August 2025 17:24:51 +0000 (0:00:01.288) 0:01:24.063 *********
2025-08-29 17:25:57.596614 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:25:57.596625 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:25:57.596635 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:25:57.596646 | orchestrator |
2025-08-29 17:25:57.596657 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-08-29 17:25:57.596667 | orchestrator | Friday 29 August 2025 17:24:52 +0000 (0:00:00.573) 0:01:24.636 *********
2025-08-29 17:25:57.596678 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:25:57.596689 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:25:57.596699 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:25:57.596709 | orchestrator |
2025-08-29 17:25:57.596720 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-08-29 17:25:57.596731 | orchestrator | Friday 29 August 2025 17:24:52 +0000 (0:00:00.559) 0:01:25.196 *********
2025-08-29 17:25:57.596742 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.596752 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.596763 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.596774 | orchestrator |
2025-08-29 17:25:57.596789 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-08-29 17:25:57.596800 | orchestrator | Friday 29 August 2025 17:24:53 +0000 (0:00:00.571) 0:01:25.767 *********
2025-08-29 17:25:57.596811 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.596821 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.596832 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.596842 | orchestrator |
2025-08-29 17:25:57.596853 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-08-29 17:25:57.596863 | orchestrator | Friday 29 August 2025 17:24:53 +0000 (0:00:00.342) 0:01:26.109 *********
2025-08-29 17:25:57.596874 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.596885 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.596895 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.596906 | orchestrator |
2025-08-29 17:25:57.596917 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-08-29 17:25:57.596927 | orchestrator | Friday 29 August 2025 17:24:54 +0000 (0:00:00.386) 0:01:26.496 *********
2025-08-29 17:25:57.596938 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.596949 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.596960 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.596970 | orchestrator |
2025-08-29 17:25:57.596981 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-08-29 17:25:57.596992 | orchestrator | Friday 29 August 2025 17:24:54 +0000 (0:00:00.369) 0:01:26.866 *********
2025-08-29 17:25:57.597003 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.597013 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.597023 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.597034 | orchestrator |
2025-08-29 17:25:57.597045 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-08-29 17:25:57.597056 | orchestrator | Friday 29 August 2025 17:24:55 +0000 (0:00:00.543) 0:01:27.410 *********
2025-08-29 17:25:57.597066 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.597077 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.597088 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.597098 | orchestrator |
2025-08-29 17:25:57.597109 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-08-29 17:25:57.597119 | orchestrator | Friday 29 August 2025 17:24:55 +0000 (0:00:00.361) 0:01:27.771 *********
2025-08-29 17:25:57.597131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597152 | orchestrator | changed: [testbed-node-1] => (item={'key':
'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597256 | orchestrator |
2025-08-29 17:25:57.597267 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-08-29 17:25:57.597278 | orchestrator | Friday 29 August 2025 17:24:57 +0000 (0:00:01.452) 0:01:29.224 *********
2025-08-29 17:25:57.597290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597437 | orchestrator |
2025-08-29 17:25:57.597448 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-08-29 17:25:57.597459 | orchestrator | Friday 29 August 2025 17:25:00 +0000 (0:00:03.972) 0:01:33.197 *********
2025-08-29 17:25:57.597470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.597588 | orchestrator |
2025-08-29 17:25:57.597600 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-08-29 17:25:57.597610 | orchestrator | Friday 29 August 2025 17:25:02 +0000 (0:00:01.965) 0:01:35.163 *********
2025-08-29 17:25:57.597621
| orchestrator |
2025-08-29 17:25:57.597632 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-08-29 17:25:57.597643 | orchestrator | Friday 29 August 2025 17:25:03 +0000 (0:00:00.331) 0:01:35.494 *********
2025-08-29 17:25:57.597654 | orchestrator |
2025-08-29 17:25:57.597664 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-08-29 17:25:57.597684 | orchestrator | Friday 29 August 2025 17:25:03 +0000 (0:00:00.140) 0:01:35.635 *********
2025-08-29 17:25:57.597695 | orchestrator |
2025-08-29 17:25:57.597705 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-08-29 17:25:57.597716 | orchestrator | Friday 29 August 2025 17:25:03 +0000 (0:00:00.142) 0:01:35.778 *********
2025-08-29 17:25:57.597727 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:25:57.597738 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:25:57.597748 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:25:57.597759 | orchestrator |
2025-08-29 17:25:57.597770 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-08-29 17:25:57.597780 | orchestrator | Friday 29 August 2025 17:25:06 +0000 (0:00:02.924) 0:01:38.703 *********
2025-08-29 17:25:57.597791 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:25:57.597802 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:25:57.597812 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:25:57.597823 | orchestrator |
2025-08-29 17:25:57.597834 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-08-29 17:25:57.597845 | orchestrator | Friday 29 August 2025 17:25:09 +0000 (0:00:02.775) 0:01:41.478 *********
2025-08-29 17:25:57.597855 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:25:57.597866 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:25:57.597876 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:25:57.597887 | orchestrator |
2025-08-29 17:25:57.597898 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-08-29 17:25:57.597908 | orchestrator | Friday 29 August 2025 17:25:16 +0000 (0:00:07.564) 0:01:49.043 *********
2025-08-29 17:25:57.597919 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:25:57.597930 | orchestrator |
2025-08-29 17:25:57.597940 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-08-29 17:25:57.597951 | orchestrator | Friday 29 August 2025 17:25:17 +0000 (0:00:00.341) 0:01:49.384 *********
2025-08-29 17:25:57.597962 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:25:57.597973 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:25:57.597984 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:25:57.597994 | orchestrator |
2025-08-29 17:25:57.598005 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-08-29 17:25:57.598049 | orchestrator | Friday 29 August 2025 17:25:18 +0000 (0:00:00.974) 0:01:50.359 *********
2025-08-29 17:25:57.598063 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.598074 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.598085 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:25:57.598095 | orchestrator |
2025-08-29 17:25:57.598106 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-08-29 17:25:57.598117 | orchestrator | Friday 29 August 2025 17:25:18 +0000 (0:00:00.630) 0:01:50.990 *********
2025-08-29 17:25:57.598128 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:25:57.598139 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:25:57.598149 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:25:57.598159 | orchestrator |
2025-08-29 17:25:57.598170 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-08-29 17:25:57.598181 | orchestrator | Friday 29 August 2025 17:25:19 +0000 (0:00:00.807) 0:01:51.798 *********
2025-08-29 17:25:57.598191 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:25:57.598202 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:25:57.598213 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:25:57.598223 | orchestrator |
2025-08-29 17:25:57.598234 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-08-29 17:25:57.598245 | orchestrator | Friday 29 August 2025 17:25:20 +0000 (0:00:00.642) 0:01:52.441 *********
2025-08-29 17:25:57.598256 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:25:57.598266 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:25:57.598283 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:25:57.598295 | orchestrator |
2025-08-29 17:25:57.598321 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-08-29 17:25:57.598339 | orchestrator | Friday 29 August 2025 17:25:21 +0000 (0:00:01.088) 0:01:53.530 *********
2025-08-29 17:25:57.598350 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:25:57.598360 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:25:57.598371 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:25:57.598381 | orchestrator |
2025-08-29 17:25:57.598392 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-08-29 17:25:57.598403 | orchestrator | Friday 29 August 2025 17:25:22 +0000 (0:00:00.730) 0:01:54.261 *********
2025-08-29 17:25:57.598413 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:25:57.598424 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:25:57.598434 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:25:57.598445 | orchestrator |
2025-08-29 17:25:57.598455 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-08-29 17:25:57.598466 |
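Tasks like "Get OVN_Northbound cluster leader" inspect the RAFT status that each ovsdb-server reports (e.g. via `ovs-appctl ... cluster/status OVN_Northbound`), which includes a `Role:` line naming the node leader or follower. A small sketch of extracting that role from such output; the sample text is abbreviated and illustrative, not captured from this job:

```python
def cluster_role(status_output: str) -> str:
    """Extract the RAFT role ('leader' or 'follower') from cluster/status output."""
    for line in status_output.splitlines():
        if line.strip().startswith("Role:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no Role: line found in cluster/status output")

# Abbreviated, illustrative shape of `cluster/status` output:
SAMPLE = """\
Name: OVN_Northbound
Cluster ID: 1b2c (1b2c3d4e-...)
Role: leader
Term: 1
"""
```

Splitting hosts into leader and follower groups (as "Divide hosts by their OVN NB leader/follower role" does) then reduces to grouping hosts by this parsed value.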
orchestrator | Friday 29 August 2025 17:25:22 +0000 (0:00:00.313) 0:01:54.575 *********
2025-08-29 17:25:57.598477 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598494 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598505 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598517 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598528 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598539 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598550 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598562 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598590 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598602 | orchestrator |
2025-08-29 17:25:57.598613 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-08-29 17:25:57.598624 | orchestrator | Friday 29 August 2025 17:25:23 +0000 (0:00:01.412) 0:01:55.987 *********
2025-08-29 17:25:57.598635 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598646 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598671 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598683 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598717 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598756 | orchestrator |
2025-08-29 17:25:57.598767 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-08-29 17:25:57.598778 | orchestrator | Friday 29 August 2025 17:25:28 +0000 (0:00:04.926) 0:02:00.914 *********
2025-08-29 17:25:57.598794 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598806 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598817 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598844 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:25:57.598894 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes':
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:25:57.598905 | orchestrator | 2025-08-29 17:25:57.598916 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 17:25:57.598927 | orchestrator | Friday 29 August 2025 17:25:31 +0000 (0:00:02.569) 0:02:03.483 ********* 2025-08-29 17:25:57.598937 | orchestrator | 2025-08-29 17:25:57.598948 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 17:25:57.598959 | orchestrator | Friday 29 August 2025 17:25:31 +0000 (0:00:00.081) 0:02:03.564 ********* 2025-08-29 17:25:57.598970 | orchestrator | 2025-08-29 17:25:57.598980 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 17:25:57.598991 | orchestrator | Friday 29 August 2025 17:25:31 +0000 (0:00:00.092) 0:02:03.656 ********* 2025-08-29 17:25:57.599001 | orchestrator | 2025-08-29 17:25:57.599012 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-08-29 17:25:57.599023 | orchestrator | Friday 29 August 2025 17:25:31 +0000 (0:00:00.067) 0:02:03.724 ********* 2025-08-29 17:25:57.599034 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:25:57.599044 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:25:57.599055 | orchestrator | 2025-08-29 17:25:57.599071 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-08-29 17:25:57.599082 | orchestrator | Friday 29 August 2025 17:25:37 +0000 (0:00:06.137) 0:02:09.861 ********* 2025-08-29 17:25:57.599092 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:25:57.599103 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:25:57.599114 | orchestrator | 2025-08-29 17:25:57.599124 | orchestrator | RUNNING 
HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-08-29 17:25:57.599135 | orchestrator | Friday 29 August 2025 17:25:43 +0000 (0:00:06.122) 0:02:15.984 ********* 2025-08-29 17:25:57.599146 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:25:57.599156 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:25:57.599167 | orchestrator | 2025-08-29 17:25:57.599178 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-08-29 17:25:57.599188 | orchestrator | Friday 29 August 2025 17:25:50 +0000 (0:00:06.876) 0:02:22.861 ********* 2025-08-29 17:25:57.599199 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:25:57.599210 | orchestrator | 2025-08-29 17:25:57.599220 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-08-29 17:25:57.599231 | orchestrator | Friday 29 August 2025 17:25:50 +0000 (0:00:00.136) 0:02:22.997 ********* 2025-08-29 17:25:57.599242 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:25:57.599252 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:25:57.599263 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:25:57.599274 | orchestrator | 2025-08-29 17:25:57.599284 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-08-29 17:25:57.599295 | orchestrator | Friday 29 August 2025 17:25:51 +0000 (0:00:00.797) 0:02:23.794 ********* 2025-08-29 17:25:57.599350 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:25:57.599362 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:25:57.599373 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:25:57.599384 | orchestrator | 2025-08-29 17:25:57.599394 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-08-29 17:25:57.599405 | orchestrator | Friday 29 August 2025 17:25:52 +0000 (0:00:00.590) 0:02:24.385 ********* 2025-08-29 17:25:57.599416 | orchestrator | ok: 
[testbed-node-0] 2025-08-29 17:25:57.599427 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:25:57.599438 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:25:57.599449 | orchestrator | 2025-08-29 17:25:57.599459 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-08-29 17:25:57.599477 | orchestrator | Friday 29 August 2025 17:25:52 +0000 (0:00:00.761) 0:02:25.147 ********* 2025-08-29 17:25:57.599488 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:25:57.599498 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:25:57.599509 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:25:57.599520 | orchestrator | 2025-08-29 17:25:57.599531 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-08-29 17:25:57.599541 | orchestrator | Friday 29 August 2025 17:25:53 +0000 (0:00:00.883) 0:02:26.030 ********* 2025-08-29 17:25:57.599552 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:25:57.599563 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:25:57.599574 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:25:57.599585 | orchestrator | 2025-08-29 17:25:57.599595 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-08-29 17:25:57.599606 | orchestrator | Friday 29 August 2025 17:25:54 +0000 (0:00:00.717) 0:02:26.748 ********* 2025-08-29 17:25:57.599617 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:25:57.599628 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:25:57.599639 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:25:57.599649 | orchestrator | 2025-08-29 17:25:57.599660 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:25:57.599671 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-08-29 17:25:57.599747 | orchestrator | testbed-node-1 : ok=43  changed=19  
unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-08-29 17:25:57.599769 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-08-29 17:25:57.599780 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:25:57.599791 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:25:57.599802 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:25:57.599813 | orchestrator | 2025-08-29 17:25:57.599824 | orchestrator | 2025-08-29 17:25:57.599835 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:25:57.599845 | orchestrator | Friday 29 August 2025 17:25:55 +0000 (0:00:00.946) 0:02:27.694 ********* 2025-08-29 17:25:57.599856 | orchestrator | =============================================================================== 2025-08-29 17:25:57.599867 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 38.90s 2025-08-29 17:25:57.599877 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.08s 2025-08-29 17:25:57.599886 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.44s 2025-08-29 17:25:57.599896 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.06s 2025-08-29 17:25:57.599905 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.90s 2025-08-29 17:25:57.599915 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.93s 2025-08-29 17:25:57.599924 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.97s 2025-08-29 17:25:57.599941 | orchestrator | ovn-controller : Create br-int bridge on 
OpenvSwitch -------------------- 3.03s 2025-08-29 17:25:57.599951 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.57s 2025-08-29 17:25:57.599960 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.20s 2025-08-29 17:25:57.599969 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.97s 2025-08-29 17:25:57.599985 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.55s 2025-08-29 17:25:57.599994 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.54s 2025-08-29 17:25:57.600004 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.47s 2025-08-29 17:25:57.600013 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s 2025-08-29 17:25:57.600023 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.41s 2025-08-29 17:25:57.600033 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.35s 2025-08-29 17:25:57.600042 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.29s 2025-08-29 17:25:57.600052 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.29s 2025-08-29 17:25:57.600061 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.22s 2025-08-29 17:25:57.600075 | orchestrator | 2025-08-29 17:25:57 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED 2025-08-29 17:25:57.600084 | orchestrator | 2025-08-29 17:25:57 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:25:57.600094 | orchestrator | 2025-08-29 17:25:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:26:00.624591 | orchestrator | 2025-08-29 17:26:00 | INFO  | Task 
6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state STARTED 2025-08-29 17:26:00.626671 | orchestrator | 2025-08-29 17:26:00 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:26:00.626697 | orchestrator | 2025-08-29 17:26:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:28:51.286230 | orchestrator | 2025-08-29 17:28:51 | INFO  | Task 6db6fa28-b8a3-4faf-b1c9-f104a6331fe6 is in state SUCCESS 2025-08-29 17:28:51.288605 | orchestrator | 2025-08-29 17:28:51.288717 | orchestrator | 2025-08-29 17:28:51.288734 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:28:51.288746 | orchestrator | 2025-08-29 17:28:51.288792 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 17:28:51.288805 | orchestrator | Friday 29 August 2025 17:22:12 +0000 (0:00:00.559) 0:00:00.559 *********
2025-08-29 17:28:51.288817 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:28:51.288829 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:28:51.288840 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:28:51.288851 | orchestrator |
2025-08-29 17:28:51.288862 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 17:28:51.288873 | orchestrator | Friday 29 August 2025 17:22:13 +0000 (0:00:00.764) 0:00:01.323 *********
2025-08-29 17:28:51.288884 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-08-29 17:28:51.288895 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-08-29 17:28:51.289064 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-08-29 17:28:51.289076 | orchestrator |
2025-08-29 17:28:51.289087 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-08-29 17:28:51.289097 | orchestrator |
2025-08-29 17:28:51.289108 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-08-29 17:28:51.289132 | orchestrator | Friday 29 August 2025 17:22:13 +0000 (0:00:00.479) 0:00:01.803 *********
2025-08-29 17:28:51.289144 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:28:51.289155 | orchestrator |
2025-08-29 17:28:51.289166 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-08-29 17:28:51.289177 | orchestrator | Friday 29 August 2025 17:22:14 +0000 (0:00:00.714) 0:00:02.517 *********
2025-08-29 17:28:51.289187 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:28:51.289198 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:28:51.289234 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:28:51.289246 | orchestrator |
2025-08-29 17:28:51.289281 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-08-29 17:28:51.289309 | orchestrator | Friday 29 August 2025 17:22:15 +0000 (0:00:00.855) 0:00:03.373 *********
2025-08-29 17:28:51.289320 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:28:51.289370 | orchestrator |
2025-08-29 17:28:51.289381 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-08-29 17:28:51.289392 | orchestrator | Friday 29 August 2025 17:22:17 +0000 (0:00:01.935) 0:00:05.308 *********
2025-08-29 17:28:51.289402 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:28:51.289413 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:28:51.289424 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:28:51.289435 | orchestrator |
2025-08-29 17:28:51.289446 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-08-29 17:28:51.289457 | orchestrator | Friday 29 August 2025 17:22:18 +0000 (0:00:00.819) 0:00:06.128 *********
2025-08-29 17:28:51.289467 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 17:28:51.289479 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 17:28:51.289489 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 17:28:51.289522 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 17:28:51.289534 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 17:28:51.289544 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 17:28:51.289567 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 17:28:51.289579 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 17:28:51.289590 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 17:28:51.289601 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 17:28:51.289611 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 17:28:51.289635 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 17:28:51.289646 | orchestrator |
2025-08-29 17:28:51.289657 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-08-29 17:28:51.289677 | orchestrator | Friday 29 August 2025 17:22:22 +0000 (0:00:04.069) 0:00:10.198 *********
2025-08-29 17:28:51.289689 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-08-29 17:28:51.289717 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-08-29 17:28:51.289728 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-08-29 17:28:51.289739 | orchestrator |
2025-08-29 17:28:51.289749 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-08-29 17:28:51.289760 | orchestrator | Friday 29 August 2025 17:22:23 +0000 (0:00:01.666) 0:00:11.864 *********
2025-08-29 17:28:51.289786 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-08-29 17:28:51.289797 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-08-29 17:28:51.289819 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-08-29 17:28:51.289830 | orchestrator |
2025-08-29 17:28:51.289887 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
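In the sysctl task above, items whose value is the sentinel string `KOLLA_UNSET` report `ok` rather than `changed`: the setting is left at whatever the kernel already has. A minimal sketch of that filtering logic, using the same item shape as the log output (this is illustrative only, not kolla-ansible's actual implementation, and `render_sysctl_conf` is a made-up helper name):

```python
# Illustrative sketch: render sysctl.conf-style lines from kolla-style
# {'name': ..., 'value': ...} items, skipping the KOLLA_UNSET sentinel.
# Not the actual kolla-ansible code.

def render_sysctl_conf(items):
    """Return sysctl.conf-style lines, ignoring KOLLA_UNSET entries."""
    lines = []
    for item in items:
        if item["value"] == "KOLLA_UNSET":
            continue  # leave the current kernel setting untouched
        lines.append(f"{item['name']}={item['value']}")
    return "\n".join(lines)

# The same values the play applied on each node:
items = [
    {"name": "net.ipv6.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.tcp_retries2", "value": "KOLLA_UNSET"},
    {"name": "net.unix.max_dgram_qlen", "value": 128},
]
print(render_sysctl_conf(items))
```

This mirrors why `net.ipv4.tcp_retries2` shows up as `ok` on all three nodes while the other three settings show `changed`.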
2025-08-29 17:28:51.289899 | orchestrator | Friday 29 August 2025 17:22:25 +0000 (0:00:01.699) 0:00:13.563 *********
2025-08-29 17:28:51.289910 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-08-29 17:28:51.289921 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.289947 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-08-29 17:28:51.289958 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.289969 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-08-29 17:28:51.289980 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.289991 | orchestrator |
2025-08-29 17:28:51.290001 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-08-29 17:28:51.290013 | orchestrator | Friday 29 August 2025 17:22:26 +0000 (0:00:01.113) 0:00:14.676 *********
2025-08-29 17:28:51.290079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 17:28:51.290105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 17:28:51.290128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 17:28:51.290139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 17:28:51.290151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 17:28:51.290171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 17:28:51.290184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 17:28:51.290231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
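The tasks in this play iterate a services dictionary in which haproxy, proxysql, and keepalived carry `'enabled': True`, while haproxy-ssh is `'enabled': False` and is therefore skipped (visible in the `skipping:` items further down). A sketch of that enabled-only iteration, assuming each service dict carries an `enabled` flag as the log items do (illustrative logic, not kolla-ansible's actual code; `ensure_config_dirs` is a made-up helper):

```python
# Illustrative sketch: create a config directory per enabled service,
# mirroring "Ensuring config directories exist". Service names come from
# the log; the function itself is not kolla-ansible's implementation.
from pathlib import Path
import tempfile

services = {
    "haproxy": {"container_name": "haproxy", "enabled": True},
    "proxysql": {"container_name": "proxysql", "enabled": True},
    "keepalived": {"container_name": "keepalived", "enabled": True},
    "haproxy-ssh": {"container_name": "haproxy_ssh", "enabled": False},
}

def ensure_config_dirs(base, services):
    """Create <base>/<service> for every enabled service; return created names."""
    created = []
    for name, svc in services.items():
        if not svc["enabled"]:
            continue  # disabled services (e.g. haproxy-ssh) are skipped
        Path(base, name).mkdir(parents=True, exist_ok=True)
        created.append(name)
    return created

with tempfile.TemporaryDirectory() as base:
    print(ensure_config_dirs(base, services))  # → ['haproxy', 'proxysql', 'keepalived']
```

The same enabled/disabled split explains the later "Removing checks for services which are disabled" and "Copying checks for services which are enabled" task results.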
2025-08-29 17:28:51.290256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 17:28:51.290268 | orchestrator |
2025-08-29 17:28:51.290279 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-08-29 17:28:51.290290 | orchestrator | Friday 29 August 2025 17:22:28 +0000 (0:00:02.322) 0:00:16.999 *********
2025-08-29 17:28:51.290301 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:28:51.290312 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:28:51.290323 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:28:51.290354 | orchestrator |
2025-08-29 17:28:51.290365 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-08-29 17:28:51.290376 | orchestrator | Friday 29 August 2025 17:22:30 +0000 (0:00:01.358) 0:00:18.357 *********
2025-08-29 17:28:51.290387 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-08-29 17:28:51.290397 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-08-29 17:28:51.290408 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-08-29 17:28:51.290419 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-08-29 17:28:51.290429 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-08-29 17:28:51.290440 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-08-29 17:28:51.290450 | orchestrator |
2025-08-29 17:28:51.290461 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-08-29 17:28:51.290472 | orchestrator | Friday 29 August 2025 17:22:32 +0000 (0:00:02.016) 0:00:20.365 *********
2025-08-29 17:28:51.290483 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:28:51.290493 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:28:51.290504 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:28:51.290514 | orchestrator |
2025-08-29 17:28:51.290525 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-08-29 17:28:51.290536 | orchestrator | Friday 29 August 2025 17:22:34 +0000 (0:00:02.259) 0:00:22.382 *********
2025-08-29 17:28:51.290547 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:28:51.290558 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:28:51.290568 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:28:51.290579 | orchestrator |
2025-08-29 17:28:51.290590 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-08-29 17:28:51.290601 | orchestrator | Friday 29 August 2025 17:22:36 +0000 (0:00:02.259) 0:00:24.642 *********
2025-08-29 17:28:51.290612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 17:28:51.290647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 17:28:51.290675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 17:28:51.290692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__757e952e1e881a642235c6198fa9b2a4820cf0dc', '__omit_place_holder__757e952e1e881a642235c6198fa9b2a4820cf0dc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-08-29 17:28:51.290704 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.290716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 17:28:51.290728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 17:28:51.290739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 17:28:51.290751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__757e952e1e881a642235c6198fa9b2a4820cf0dc', '__omit_place_holder__757e952e1e881a642235c6198fa9b2a4820cf0dc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-08-29 17:28:51.290762 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.290804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 17:28:51.290817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 17:28:51.290834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 17:28:51.290845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__757e952e1e881a642235c6198fa9b2a4820cf0dc', '__omit_place_holder__757e952e1e881a642235c6198fa9b2a4820cf0dc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-08-29 17:28:51.290857 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.290868 | orchestrator |
2025-08-29 17:28:51.290879 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2025-08-29 17:28:51.290890 | orchestrator | Friday 29 August 2025 17:22:37 +0000 (0:00:01.030) 0:00:25.672 *********
2025-08-29 17:28:51.290940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 17:28:51.290954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 17:28:51.290995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 17:28:51.291008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 17:28:51.291054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 17:28:51.291067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__757e952e1e881a642235c6198fa9b2a4820cf0dc', '__omit_place_holder__757e952e1e881a642235c6198fa9b2a4820cf0dc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-08-29 17:28:51.291078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 17:28:51.291109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 17:28:51.291121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__757e952e1e881a642235c6198fa9b2a4820cf0dc', '__omit_place_holder__757e952e1e881a642235c6198fa9b2a4820cf0dc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-08-29 17:28:51.291147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 17:28:51.291159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 17:28:51.291176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__757e952e1e881a642235c6198fa9b2a4820cf0dc', '__omit_place_holder__757e952e1e881a642235c6198fa9b2a4820cf0dc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-08-29 17:28:51.291218 | orchestrator |
2025-08-29 17:28:51.291230 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2025-08-29 17:28:51.291241 | orchestrator | Friday 29 August 2025 17:22:41 +0000 (0:00:03.942) 0:00:29.614 *********
2025-08-29 17:28:51.291253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 17:28:51.291264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 17:28:51.291275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 17:28:51.291445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:28:51.291464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:28:51.291482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:28:51.291494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 17:28:51.291505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 17:28:51.291516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 17:28:51.291538 | orchestrator | 2025-08-29 17:28:51.291550 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-08-29 17:28:51.291561 | orchestrator | Friday 29 August 2025 17:22:45 +0000 (0:00:03.989) 0:00:33.604 ********* 2025-08-29 17:28:51.291572 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 17:28:51.291583 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 17:28:51.291594 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 17:28:51.291605 | orchestrator | 2025-08-29 17:28:51.291616 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-08-29 17:28:51.291627 | orchestrator | Friday 29 August 2025 17:22:47 +0000 (0:00:02.388) 0:00:35.993 ********* 2025-08-29 17:28:51.291637 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 17:28:51.291648 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 17:28:51.291659 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 17:28:51.291670 | orchestrator | 2025-08-29 17:28:51.291707 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-08-29 17:28:51.291719 | orchestrator | Friday 29 August 2025 17:22:53 +0000 (0:00:05.890) 0:00:41.883 ********* 2025-08-29 17:28:51.291730 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.291741 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.291752 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.291763 | orchestrator | 2025-08-29 17:28:51.291774 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-08-29 17:28:51.291784 | orchestrator | Friday 29 August 2025 17:22:54 +0000 (0:00:00.734) 0:00:42.618 ********* 2025-08-29 17:28:51.291795 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 17:28:51.291806 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 17:28:51.291817 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 17:28:51.291828 | orchestrator | 2025-08-29 17:28:51.291839 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-08-29 17:28:51.291850 | orchestrator | Friday 29 August 2025 17:22:58 +0000 (0:00:03.451) 0:00:46.069 ********* 2025-08-29 17:28:51.291861 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 17:28:51.291872 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 17:28:51.291888 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 17:28:51.291899 | orchestrator | 2025-08-29 17:28:51.291922 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-08-29 17:28:51.291934 | orchestrator | Friday 29 August 2025 17:23:01 +0000 (0:00:03.925) 0:00:49.995 ********* 2025-08-29 17:28:51.291944 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-08-29 17:28:51.291955 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-08-29 17:28:51.291966 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-08-29 17:28:51.291977 | orchestrator | 2025-08-29 17:28:51.291988 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-08-29 17:28:51.292015 | orchestrator | Friday 29 August 2025 17:23:03 +0000 (0:00:01.977) 0:00:51.972 ********* 2025-08-29 17:28:51.292048 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-08-29 17:28:51.292069 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-08-29 17:28:51.292096 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-08-29 17:28:51.292107 
| orchestrator | 2025-08-29 17:28:51.292118 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-08-29 17:28:51.292129 | orchestrator | Friday 29 August 2025 17:23:06 +0000 (0:00:02.197) 0:00:54.170 ********* 2025-08-29 17:28:51.292140 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:28:51.292172 | orchestrator | 2025-08-29 17:28:51.292183 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-08-29 17:28:51.292194 | orchestrator | Friday 29 August 2025 17:23:06 +0000 (0:00:00.796) 0:00:54.967 ********* 2025-08-29 17:28:51.292205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 17:28:51.292217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 17:28:51.292235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 17:28:51.292247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:28:51.292263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:28:51.292282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:28:51.292294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 17:28:51.292306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 17:28:51.292317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 17:28:51.292349 | orchestrator | 2025-08-29 17:28:51.292361 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-08-29 17:28:51.292371 | orchestrator | Friday 29 August 2025 17:23:10 +0000 (0:00:04.041) 0:00:59.008 ********* 2025-08-29 17:28:51.292390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.292402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 
17:28:51.292418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.292436 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.292447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.292459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.292470 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.292482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.292493 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.292511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.292523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 
'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.292540 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.292551 | orchestrator | 2025-08-29 17:28:51.292562 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-08-29 17:28:51.292574 | orchestrator | Friday 29 August 2025 17:23:11 +0000 (0:00:00.595) 0:00:59.604 ********* 2025-08-29 17:28:51.292585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.292596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.292636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.292648 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.292679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.292697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.292709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.292728 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.292745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.292768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.292780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.292792 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.292803 | orchestrator | 2025-08-29 17:28:51.292814 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-08-29 17:28:51.292825 | orchestrator | Friday 29 August 2025 17:23:12 +0000 (0:00:01.053) 0:01:00.658 ********* 2025-08-29 17:28:51.292836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.292853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.292865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.292883 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.292894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.292911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.292923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.292946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.292958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.292969 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.292985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.293009 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.293020 | orchestrator | 2025-08-29 17:28:51.293031 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-08-29 17:28:51.293042 | orchestrator | Friday 29 August 2025 17:23:13 +0000 (0:00:00.967) 0:01:01.625 ********* 2025-08-29 17:28:51.293053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.293069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.293081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.293093 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.293104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.293115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.293127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.293164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.293367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.293382 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.293399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.293410 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.293421 | orchestrator | 2025-08-29 17:28:51.293432 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-08-29 17:28:51.293443 | orchestrator | Friday 29 August 2025 17:23:14 +0000 (0:00:00.960) 0:01:02.586 ********* 2025-08-29 17:28:51.293455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.293466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.293478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.293497 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.293514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 
 2025-08-29 17:28:51.293526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.293538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.293553 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.293565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.293576 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.293588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.293599 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.293610 | orchestrator | 2025-08-29 17:28:51.293621 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-08-29 17:28:51.293639 | orchestrator | Friday 29 August 2025 17:23:16 +0000 (0:00:01.701) 0:01:04.287 ********* 2025-08-29 17:28:51.293650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.293669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.293698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.293710 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.293726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.293738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.293750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.293761 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.293773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.293811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.293825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.293836 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.293847 | orchestrator | 2025-08-29 17:28:51.293858 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-08-29 17:28:51.293869 | orchestrator | Friday 29 August 2025 17:23:17 +0000 (0:00:01.425) 0:01:05.712 ********* 2025-08-29 17:28:51.293885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.293907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.293919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.293931 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.293959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.293971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.293990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.294066 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.294082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.294099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.294111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.294122 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.294133 | orchestrator | 2025-08-29 17:28:51.294144 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-08-29 17:28:51.294155 | orchestrator | Friday 29 August 2025 17:23:18 +0000 (0:00:00.622) 0:01:06.335 ********* 2025-08-29 17:28:51.294174 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.294186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.294197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.294209 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.294227 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.294239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.294255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.294266 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.294278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 17:28:51.294295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:28:51.294307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:28:51.294318 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.294490 | orchestrator | 2025-08-29 17:28:51.294503 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] 
************************ 2025-08-29 17:28:51.294514 | orchestrator | Friday 29 August 2025 17:23:19 +0000 (0:00:00.865) 0:01:07.201 ********* 2025-08-29 17:28:51.294525 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 17:28:51.294537 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 17:28:51.294554 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 17:28:51.294566 | orchestrator | 2025-08-29 17:28:51.294576 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-08-29 17:28:51.294587 | orchestrator | Friday 29 August 2025 17:23:21 +0000 (0:00:01.919) 0:01:09.120 ********* 2025-08-29 17:28:51.294598 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 17:28:51.294609 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 17:28:51.294620 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 17:28:51.294631 | orchestrator | 2025-08-29 17:28:51.294642 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-08-29 17:28:51.294653 | orchestrator | Friday 29 August 2025 17:23:22 +0000 (0:00:01.393) 0:01:10.514 ********* 2025-08-29 17:28:51.294663 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 17:28:51.294674 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 17:28:51.294685 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 
'sshd_config'})  2025-08-29 17:28:51.294695 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 17:28:51.294706 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.294717 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 17:28:51.294728 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.294755 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 17:28:51.294766 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.294777 | orchestrator | 2025-08-29 17:28:51.294788 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-08-29 17:28:51.294799 | orchestrator | Friday 29 August 2025 17:23:23 +0000 (0:00:00.813) 0:01:11.328 ********* 2025-08-29 17:28:51.294810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 17:28:51.294822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 17:28:51.294834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 17:28:51.294852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:28:51.294864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:28:51.294879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:28:51.294897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 17:28:51.294909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 17:28:51.294935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 17:28:51.294947 | orchestrator | 2025-08-29 17:28:51.294958 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-08-29 17:28:51.294969 | orchestrator | Friday 29 August 2025 17:23:25 +0000 (0:00:02.552) 0:01:13.881 ********* 2025-08-29 17:28:51.294980 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:28:51.294990 | orchestrator | 2025-08-29 17:28:51.295026 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-08-29 17:28:51.295037 | orchestrator | Friday 29 August 2025 17:23:26 +0000 (0:00:00.906) 0:01:14.787 ********* 2025-08-29 17:28:51.295050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 17:28:51.295069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.295088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.295105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': 
'30'}}})  2025-08-29 17:28:51.295117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 17:28:51.295128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.295140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.295165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.295209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 17:28:51.295263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.295276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.295287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.295298 | orchestrator | 2025-08-29 17:28:51.295309 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-08-29 17:28:51.295320 | orchestrator | Friday 29 August 2025 17:23:31 +0000 (0:00:04.842) 0:01:19.629 ********* 2025-08-29 17:28:51.295392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 17:28:51.295491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.295526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.295551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 
'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.295569 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.295586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 17:28:51.295602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.295621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.295638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.295664 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.295694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 17:28:51.295718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.295730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.295740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.295750 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.295759 | orchestrator | 2025-08-29 17:28:51.295769 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-08-29 17:28:51.295804 | orchestrator | Friday 29 August 2025 17:23:32 +0000 (0:00:00.991) 0:01:20.621 ********* 2025-08-29 17:28:51.295815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 17:28:51.295826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 17:28:51.295836 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.295846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 17:28:51.295856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 17:28:51.295866 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.295925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 17:28:51.295936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 17:28:51.295946 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 17:28:51.295956 | orchestrator | 2025-08-29 17:28:51.295971 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-08-29 17:28:51.295981 | orchestrator | Friday 29 August 2025 17:23:33 +0000 (0:00:01.090) 0:01:21.712 ********* 2025-08-29 17:28:51.295991 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.296000 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.296010 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.296021 | orchestrator | 2025-08-29 17:28:51.296067 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-08-29 17:28:51.296081 | orchestrator | Friday 29 August 2025 17:23:34 +0000 (0:00:01.294) 0:01:23.007 ********* 2025-08-29 17:28:51.296122 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.296137 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.296172 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.296187 | orchestrator | 2025-08-29 17:28:51.296216 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-08-29 17:28:51.296242 | orchestrator | Friday 29 August 2025 17:23:36 +0000 (0:00:02.015) 0:01:25.022 ********* 2025-08-29 17:28:51.296253 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:28:51.296263 | orchestrator | 2025-08-29 17:28:51.296272 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-08-29 17:28:51.296282 | orchestrator | Friday 29 August 2025 17:23:37 +0000 (0:00:00.926) 0:01:25.949 ********* 2025-08-29 17:28:51.296299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.296310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.296321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.296391 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.296411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.296422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.296441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.296452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.296476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.296558 | orchestrator | 2025-08-29 17:28:51.296577 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-08-29 17:28:51.296592 | orchestrator | Friday 29 August 2025 17:23:42 +0000 (0:00:04.383) 0:01:30.332 ********* 2025-08-29 17:28:51.296618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 17:28:51.296636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.296662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.296679 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.296690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 17:28:51.296709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.296719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.296729 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.296746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 17:28:51.296757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.296767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.296777 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.296787 | orchestrator | 2025-08-29 17:28:51.296797 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 
2025-08-29 17:28:51.296807 | orchestrator | Friday 29 August 2025 17:23:43 +0000 (0:00:01.424) 0:01:31.756 ********* 2025-08-29 17:28:51.296817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 17:28:51.296834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 17:28:51.296845 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.296855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 17:28:51.296864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 17:28:51.296874 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.296884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 17:28:51.296893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 17:28:51.296903 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.296913 | orchestrator | 2025-08-29 17:28:51.296922 | orchestrator | TASK [proxysql-config : Copying over barbican 
ProxySQL users config] *********** 2025-08-29 17:28:51.296932 | orchestrator | Friday 29 August 2025 17:23:44 +0000 (0:00:01.162) 0:01:32.919 ********* 2025-08-29 17:28:51.296941 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.296951 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.296960 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.296970 | orchestrator | 2025-08-29 17:28:51.296998 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-08-29 17:28:51.297009 | orchestrator | Friday 29 August 2025 17:23:46 +0000 (0:00:01.294) 0:01:34.214 ********* 2025-08-29 17:28:51.297018 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.297028 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.297037 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.297047 | orchestrator | 2025-08-29 17:28:51.297062 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-08-29 17:28:51.297072 | orchestrator | Friday 29 August 2025 17:23:48 +0000 (0:00:02.012) 0:01:36.227 ********* 2025-08-29 17:28:51.297081 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.297091 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.297100 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.297110 | orchestrator | 2025-08-29 17:28:51.297120 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-08-29 17:28:51.297130 | orchestrator | Friday 29 August 2025 17:23:48 +0000 (0:00:00.321) 0:01:36.549 ********* 2025-08-29 17:28:51.297140 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:28:51.297149 | orchestrator | 2025-08-29 17:28:51.297159 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-08-29 17:28:51.297169 | orchestrator | Friday 29 August 2025 
17:23:49 +0000 (0:00:00.874) 0:01:37.423 ********* 2025-08-29 17:28:51.297184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 17:28:51.297202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 17:28:51.297212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 17:28:51.297223 | orchestrator | 2025-08-29 17:28:51.297232 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-08-29 17:28:51.297242 | orchestrator | Friday 29 August 2025 17:23:52 +0000 (0:00:02.684) 0:01:40.108 ********* 2025-08-29 17:28:51.297258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 17:28:51.297268 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.297278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 
'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 17:28:51.297295 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.297310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 17:28:51.297321 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.297362 | orchestrator | 2025-08-29 17:28:51.297373 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-08-29 17:28:51.297382 | orchestrator | Friday 
29 August 2025 17:23:53 +0000 (0:00:01.640) 0:01:41.748 ********* 2025-08-29 17:28:51.297393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 17:28:51.297405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 17:28:51.297415 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.297425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 17:28:51.297436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 17:28:51.297445 | orchestrator | 
skipping: [testbed-node-0] 2025-08-29 17:28:51.297461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 17:28:51.297472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 17:28:51.297488 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.297498 | orchestrator | 2025-08-29 17:28:51.297507 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-08-29 17:28:51.297517 | orchestrator | Friday 29 August 2025 17:23:55 +0000 (0:00:02.132) 0:01:43.880 ********* 2025-08-29 17:28:51.297527 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.297536 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.297545 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.297555 | orchestrator | 2025-08-29 17:28:51.297564 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-08-29 17:28:51.297574 | orchestrator | Friday 29 August 2025 17:23:56 +0000 (0:00:00.796) 0:01:44.677 ********* 2025-08-29 17:28:51.297584 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.297593 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.297604 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 17:28:51.297621 | orchestrator | 2025-08-29 17:28:51.297638 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-08-29 17:28:51.297653 | orchestrator | Friday 29 August 2025 17:23:57 +0000 (0:00:01.237) 0:01:45.914 ********* 2025-08-29 17:28:51.297676 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:28:51.297693 | orchestrator | 2025-08-29 17:28:51.297708 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-08-29 17:28:51.297724 | orchestrator | Friday 29 August 2025 17:23:58 +0000 (0:00:00.766) 0:01:46.680 ********* 2025-08-29 17:28:51.297741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.297759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.297778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.297805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.297840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.297859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.297876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.297894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.297920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.297946 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.297965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.297976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.297986 | orchestrator |
2025-08-29 17:28:51.297996 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-08-29 17:28:51.298005 | orchestrator | Friday 29 August 2025 17:24:03 +0000 (0:00:04.842) 0:01:51.522 *********
2025-08-29 17:28:51.298046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 17:28:51.298059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298110 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.298120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 17:28:51.298131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 17:28:51.298142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298209 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.298220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298235 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.298245 | orchestrator |
2025-08-29 17:28:51.298255 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-08-29 17:28:51.298265 | orchestrator | Friday 29 August 2025 17:24:04 +0000 (0:00:01.196) 0:01:52.718 *********
2025-08-29 17:28:51.298275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 17:28:51.298290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 17:28:51.298300 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.298310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 17:28:51.298320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 17:28:51.298349 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.298359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 17:28:51.298369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 17:28:51.298378 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.298388 | orchestrator |
2025-08-29 17:28:51.298398 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-08-29 17:28:51.298407 | orchestrator | Friday 29 August 2025 17:24:05 +0000 (0:00:01.059) 0:01:53.777 *********
2025-08-29 17:28:51.298421 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:28:51.298431 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:28:51.298441 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:28:51.298450 | orchestrator |
2025-08-29 17:28:51.298460 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-08-29 17:28:51.298469 | orchestrator | Friday 29 August 2025 17:24:07 +0000 (0:00:01.393) 0:01:55.171 *********
2025-08-29 17:28:51.298479 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:28:51.298488 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:28:51.298498 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:28:51.298507 | orchestrator |
2025-08-29 17:28:51.298516 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-08-29 17:28:51.298526 | orchestrator | Friday 29 August 2025 17:24:09 +0000 (0:00:02.313) 0:01:57.485 *********
2025-08-29 17:28:51.298536 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.298545 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.298555 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.298564 | orchestrator |
2025-08-29 17:28:51.298574 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-08-29 17:28:51.298583 | orchestrator | Friday 29 August 2025 17:24:10 +0000 (0:00:00.642) 0:01:58.127 *********
2025-08-29 17:28:51.298593 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.298602 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.298618 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.298628 | orchestrator |
2025-08-29 17:28:51.298637 | orchestrator | TASK [include_role : designate] ************************************************
2025-08-29 17:28:51.298647 | orchestrator | Friday 29 August 2025 17:24:10 +0000 (0:00:00.357) 0:01:58.485 *********
2025-08-29 17:28:51.298656 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:28:51.298666 | orchestrator |
2025-08-29 17:28:51.298675 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-08-29 17:28:51.298685 | orchestrator | Friday 29 August 2025 17:24:11 +0000 (0:00:00.804) 0:01:59.290 *********
2025-08-29 17:28:51.298695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 17:28:51.298710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 17:28:51.298721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 17:28:51.298799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 17:28:51.298810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 17:28:51.298886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 17:28:51.298901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.298957 | orchestrator |
2025-08-29 17:28:51.298967 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-08-29 17:28:51.298977 | orchestrator | Friday 29 August 2025 17:24:16 +0000 (0:00:04.786) 0:02:04.076 *********
2025-08-29 17:28:51.298993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 17:28:51.299009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 17:28:51.299029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.299040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.299050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.299060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.299076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.299086 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.299096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 17:28:51.299116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 17:28:51.299127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.299137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.299147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 17:28:51.299163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.299173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 17:28:51.299194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'],
'timeout': '30'}}})  2025-08-29 17:28:51.299204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.299214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.299225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.299235 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.299245 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.299261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.299271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.299287 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.299297 | orchestrator | 2025-08-29 17:28:51.299307 | orchestrator | TASK 
[haproxy-config : Configuring firewall for designate] ********************* 2025-08-29 17:28:51.299317 | orchestrator | Friday 29 August 2025 17:24:16 +0000 (0:00:00.855) 0:02:04.932 ********* 2025-08-29 17:28:51.299380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 17:28:51.299393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-08-29 17:28:51.299403 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.299413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 17:28:51.299423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-08-29 17:28:51.299432 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.299442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 17:28:51.299452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-08-29 17:28:51.299462 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.299472 | orchestrator | 2025-08-29 17:28:51.299481 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-08-29 
17:28:51.299491 | orchestrator | Friday 29 August 2025 17:24:17 +0000 (0:00:01.008) 0:02:05.940 ********* 2025-08-29 17:28:51.299501 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.299510 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.299520 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.299529 | orchestrator | 2025-08-29 17:28:51.299539 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-08-29 17:28:51.299549 | orchestrator | Friday 29 August 2025 17:24:19 +0000 (0:00:01.760) 0:02:07.701 ********* 2025-08-29 17:28:51.299558 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.299568 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.299577 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.299586 | orchestrator | 2025-08-29 17:28:51.299596 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-08-29 17:28:51.299606 | orchestrator | Friday 29 August 2025 17:24:21 +0000 (0:00:01.886) 0:02:09.588 ********* 2025-08-29 17:28:51.299615 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.299624 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.299632 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.299639 | orchestrator | 2025-08-29 17:28:51.299647 | orchestrator | TASK [include_role : glance] *************************************************** 2025-08-29 17:28:51.299655 | orchestrator | Friday 29 August 2025 17:24:22 +0000 (0:00:00.544) 0:02:10.133 ********* 2025-08-29 17:28:51.299663 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:28:51.299671 | orchestrator | 2025-08-29 17:28:51.299684 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-08-29 17:28:51.299692 | orchestrator | Friday 29 August 2025 17:24:22 +0000 (0:00:00.818) 0:02:10.951 ********* 
2025-08-29 17:28:51.299722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 17:28:51.299754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 17:28:51.299771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 17:28:51.299790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 17:28:51.299805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 17:28:51.299824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 17:28:51.299833 | orchestrator | 2025-08-29 17:28:51.299842 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-08-29 17:28:51.299850 | orchestrator | Friday 29 August 2025 17:24:27 +0000 (0:00:04.313) 0:02:15.265 ********* 2025-08-29 17:28:51.299863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 17:28:51.299882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 17:28:51.299891 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.299900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 17:28:51.299926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 17:28:51.299936 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.299945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 17:28:51.299964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 17:28:51.299974 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.299982 | orchestrator | 2025-08-29 17:28:51.299990 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-08-29 17:28:51.299998 | orchestrator | Friday 29 August 2025 17:24:30 +0000 (0:00:03.501) 0:02:18.767 ********* 2025-08-29 17:28:51.300010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 17:28:51.300019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 17:28:51.300051 | orchestrator | 
skipping: [testbed-node-0] 2025-08-29 17:28:51.300060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 17:28:51.300068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 17:28:51.300082 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.300090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 17:28:51.300104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 17:28:51.300112 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.300120 | orchestrator | 2025-08-29 17:28:51.300128 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-08-29 17:28:51.300136 | orchestrator | Friday 29 August 2025 17:24:33 +0000 (0:00:03.246) 0:02:22.013 ********* 2025-08-29 17:28:51.300144 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.300152 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.300159 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.300167 | orchestrator | 2025-08-29 17:28:51.300175 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-08-29 17:28:51.300183 | orchestrator | Friday 29 August 2025 17:24:35 +0000 (0:00:01.308) 0:02:23.322 ********* 2025-08-29 17:28:51.300190 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.300198 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.300206 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.300214 | orchestrator | 2025-08-29 17:28:51.300221 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-08-29 17:28:51.300229 | orchestrator | Friday 29 August 2025 17:24:37 +0000 (0:00:02.142) 0:02:25.465 ********* 2025-08-29 17:28:51.300237 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.300245 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.300252 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.300260 | orchestrator | 2025-08-29 17:28:51.300268 | orchestrator | TASK [include_role : grafana] 
************************************************** 2025-08-29 17:28:51.300276 | orchestrator | Friday 29 August 2025 17:24:37 +0000 (0:00:00.546) 0:02:26.012 ********* 2025-08-29 17:28:51.300284 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:28:51.300291 | orchestrator | 2025-08-29 17:28:51.300299 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-08-29 17:28:51.300307 | orchestrator | Friday 29 August 2025 17:24:38 +0000 (0:00:00.853) 0:02:26.866 ********* 2025-08-29 17:28:51.300343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 17:28:51.300358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 17:28:51.300367 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 17:28:51.300375 | orchestrator | 2025-08-29 17:28:51.300383 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-08-29 17:28:51.300391 | orchestrator | Friday 29 August 2025 17:24:42 +0000 (0:00:03.602) 0:02:30.468 ********* 2025-08-29 17:28:51.300404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 17:28:51.300412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 17:28:51.300421 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.300428 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.300440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 17:28:51.300453 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.300461 | orchestrator | 2025-08-29 17:28:51.300469 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-08-29 17:28:51.300477 | orchestrator | Friday 29 August 2025 17:24:43 +0000 (0:00:00.708) 0:02:31.177 ********* 2025-08-29 17:28:51.300485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 17:28:51.300493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 17:28:51.300501 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.300509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 17:28:51.300517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 17:28:51.300525 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.300532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 17:28:51.300540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 17:28:51.300548 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.300556 | orchestrator | 2025-08-29 17:28:51.300564 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-08-29 17:28:51.300572 | orchestrator | Friday 29 August 2025 17:24:43 +0000 (0:00:00.729) 0:02:31.907 ********* 2025-08-29 17:28:51.300580 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.300587 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.300595 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.300603 | orchestrator | 2025-08-29 17:28:51.300611 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-08-29 17:28:51.300618 | orchestrator | Friday 29 August 2025 17:24:45 +0000 (0:00:01.263) 
0:02:33.171 ********* 2025-08-29 17:28:51.300626 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.300634 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.300642 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.300649 | orchestrator | 2025-08-29 17:28:51.300657 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-08-29 17:28:51.300665 | orchestrator | Friday 29 August 2025 17:24:47 +0000 (0:00:02.204) 0:02:35.376 ********* 2025-08-29 17:28:51.300673 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.300681 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.300692 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.300700 | orchestrator | 2025-08-29 17:28:51.300708 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-08-29 17:28:51.300716 | orchestrator | Friday 29 August 2025 17:24:47 +0000 (0:00:00.570) 0:02:35.946 ********* 2025-08-29 17:28:51.300724 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:28:51.300732 | orchestrator | 2025-08-29 17:28:51.300740 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-08-29 17:28:51.300747 | orchestrator | Friday 29 August 2025 17:24:48 +0000 (0:00:00.921) 0:02:36.868 ********* 2025-08-29 17:28:51.300761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 17:28:51.300781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 17:28:51.300800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}}) 2025-08-29 17:28:51.300809 | orchestrator | 2025-08-29 17:28:51.300817 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-08-29 17:28:51.300825 | orchestrator | Friday 29 August 2025 17:24:53 +0000 (0:00:05.114) 0:02:41.982 ********* 2025-08-29 17:28:51.300839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 17:28:51.300853 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.300866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 17:28:51.300875 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.300895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 17:28:51.300910 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.300918 | orchestrator | 2025-08-29 17:28:51.300926 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-08-29 17:28:51.300934 | orchestrator | Friday 29 August 2025 17:24:55 +0000 (0:00:01.222) 0:02:43.204 ********* 2025-08-29 17:28:51.300942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 17:28:51.300950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 17:28:51.300959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 17:28:51.300967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 17:28:51.300975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-08-29 17:28:51.300983 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.300991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 17:28:51.301000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 17:28:51.301008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 17:28:51.301025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 17:28:51.301034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 17:28:51.301042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 17:28:51.301054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 17:28:51.301062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-08-29 17:28:51.301070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-08-29 17:28:51.301078 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.301086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-08-29 17:28:51.301094 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.301102 | orchestrator |
2025-08-29 17:28:51.301110 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-08-29 17:28:51.301118 | orchestrator | Friday 29 August 2025 17:24:56 +0000 (0:00:01.185) 0:02:44.389 *********
2025-08-29 17:28:51.301126 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:28:51.301134 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:28:51.301141 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:28:51.301149 | orchestrator |
2025-08-29 17:28:51.301157 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-08-29 17:28:51.301165 | orchestrator | Friday 29 August 2025 17:24:57 +0000 (0:00:01.327) 0:02:45.717 *********
2025-08-29 17:28:51.301173 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:28:51.301180 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:28:51.301188 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:28:51.301196 | orchestrator |
2025-08-29 17:28:51.301204 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-08-29 17:28:51.301212 | orchestrator | Friday 29 August 2025 17:25:00 +0000 (0:00:02.492) 0:02:48.210 *********
2025-08-29 17:28:51.301219 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.301227 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.301235 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.301243 | orchestrator |
2025-08-29 17:28:51.301250 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-08-29 17:28:51.301258 | orchestrator | Friday 29 August 2025 17:25:00 +0000 (0:00:00.345) 0:02:48.555 *********
2025-08-29 17:28:51.301266 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.301279 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.301287 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.301294 | orchestrator |
2025-08-29 17:28:51.301302 | orchestrator | TASK [include_role : keystone] *************************************************
2025-08-29 17:28:51.301310 | orchestrator | Friday 29 August 2025 17:25:01 +0000 (0:00:00.536) 0:02:49.092 *********
2025-08-29 17:28:51.301318 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:28:51.301339 | orchestrator |
2025-08-29 17:28:51.301347 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-08-29 17:28:51.301355 | orchestrator | Friday 29 August 2025 17:25:02 +0000 (0:00:01.017) 0:02:50.110 *********
2025-08-29 17:28:51.301368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy':
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 17:28:51.301378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 17:28:51.301391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 17:28:51.301400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 17:28:51.301415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 17:28:51.301423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 17:28:51.301437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 17:28:51.301450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 17:28:51.301458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 17:28:51.301467 | orchestrator |
2025-08-29 17:28:51.301474 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-08-29 17:28:51.301482 | orchestrator | Friday 29 August 2025 17:25:06 +0000 (0:00:04.248) 0:02:54.359 *********
2025-08-29 17:28:51.301491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 17:28:51.301504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2',
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 17:28:51.301517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 17:28:51.301526 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.301538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 17:28:51.301547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 17:28:51.301555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 17:28:51.301572 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.301581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions':
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 17:28:51.301736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 17:28:51.301751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 17:28:51.301759 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.301767 | orchestrator |
2025-08-29 17:28:51.301775 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-08-29 17:28:51.301783 | orchestrator | Friday 29 August 2025 17:25:07 +0000 (0:00:01.254) 0:02:55.613 *********
2025-08-29 17:28:51.301796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 17:28:51.301805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 17:28:51.301813 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.301821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 17:28:51.301836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 17:28:51.301845 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.301853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 17:28:51.301861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode':
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 17:28:51.301869 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.301877 | orchestrator |
2025-08-29 17:28:51.301885 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-08-29 17:28:51.301893 | orchestrator | Friday 29 August 2025 17:25:08 +0000 (0:00:00.852) 0:02:56.465 *********
2025-08-29 17:28:51.301900 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:28:51.301908 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:28:51.301916 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:28:51.301924 | orchestrator |
2025-08-29 17:28:51.301932 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-08-29 17:28:51.301940 | orchestrator | Friday 29 August 2025 17:25:09 +0000 (0:00:01.376) 0:02:57.842 *********
2025-08-29 17:28:51.301947 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:28:51.301955 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:28:51.301963 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:28:51.301971 | orchestrator |
2025-08-29 17:28:51.301979 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-08-29 17:28:51.301986 | orchestrator | Friday 29 August 2025 17:25:11 +0000 (0:00:02.176) 0:03:00.019 *********
2025-08-29 17:28:51.301994 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.302002 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.302010 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.302051 | orchestrator |
2025-08-29 17:28:51.302061 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-08-29 17:28:51.302069 | orchestrator | Friday 29 August 2025 17:25:12 +0000 (0:00:00.583) 0:03:00.602 *********
2025-08-29 17:28:51.302077 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:28:51.302085 | orchestrator |
2025-08-29 17:28:51.302093 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-08-29 17:28:51.302101 | orchestrator | Friday 29 August 2025 17:25:13 +0000 (0:00:01.007) 0:03:01.610 *********
2025-08-29 17:28:51.302115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 17:28:51.302129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.302144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 17:28:51.302153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.302162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 17:28:51.302175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.302183 | orchestrator |
2025-08-29 17:28:51.302191 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-08-29 17:28:51.302207 | orchestrator | Friday 29 August 2025 17:25:17 +0000 (0:00:03.527) 0:03:05.138 *********
2025-08-29 17:28:51.302219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group':
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 17:28:51.302227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.302235 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.302244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 17:28:51.302256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.302264 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.302272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 17:28:51.302289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.302298 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.302306 | orchestrator |
2025-08-29 17:28:51.302314 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-08-29 17:28:51.302322 | orchestrator | Friday 29 August 2025 17:25:18 +0000 (0:00:01.213) 0:03:06.352 *********
2025-08-29 17:28:51.302377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-08-29 17:28:51.302386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-08-29 17:28:51.302396 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.302405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-08-29 17:28:51.302414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-08-29 17:28:51.302423 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.302432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-08-29 17:28:51.302441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-08-29 17:28:51.302449 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.302458 | orchestrator |
2025-08-29 17:28:51.302467 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-08-29 17:28:51.302476 | orchestrator | Friday 29 August 2025 17:25:19 +0000 (0:00:00.943) 0:03:07.295 *********
2025-08-29 17:28:51.302485 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:28:51.302494 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:28:51.302503 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:28:51.302512 | orchestrator |
2025-08-29 17:28:51.302520 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-08-29 17:28:51.302528 | orchestrator | Friday 29 August 2025 17:25:20 +0000 (0:00:01.374) 0:03:08.670 *********
2025-08-29 17:28:51.302536 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:28:51.302544 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:28:51.302551 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:28:51.302559 | orchestrator |
2025-08-29 17:28:51.302567 | orchestrator | TASK [include_role : manila] ***************************************************
2025-08-29 17:28:51.302581 | orchestrator | Friday 29 August 2025 17:25:22 +0000 (0:00:02.236) 0:03:10.907 *********
2025-08-29 17:28:51.302593 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:28:51.302601 | orchestrator |
2025-08-29 17:28:51.302609 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-08-29 17:28:51.302617 | orchestrator | Friday 29 August 2025 17:25:24 +0000 (0:00:01.343) 0:03:12.250 *********
2025-08-29 17:28:51.302625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-08-29 17:28:51.302637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.302646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.302655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.302663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-08-29 17:28:51.302680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.302689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.302701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.302708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-08-29 17:28:51.302715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.302722 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.302736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.302743 | orchestrator | 2025-08-29 17:28:51.302750 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-08-29 17:28:51.302757 | orchestrator | Friday 29 August 2025 17:25:28 +0000 (0:00:04.138) 0:03:16.389 ********* 2025-08-29 17:28:51.302764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 17:28:51.302774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.302782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.302789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.302795 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.302803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 17:28:51.302818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 
17:28:51.302825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.302835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.302842 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.302850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 17:28:51.302856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.302868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.302879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.302886 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.302893 | orchestrator | 2025-08-29 17:28:51.302899 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-08-29 17:28:51.302906 | orchestrator | Friday 29 August 2025 17:25:29 +0000 (0:00:00.731) 0:03:17.121 ********* 2025-08-29 17:28:51.302913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 17:28:51.302920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 17:28:51.302926 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.302933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 17:28:51.302940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 17:28:51.302946 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.302956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 17:28:51.302963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 17:28:51.302970 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.302977 | orchestrator | 2025-08-29 17:28:51.302983 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-08-29 17:28:51.302990 | orchestrator | Friday 29 August 2025 17:25:30 +0000 (0:00:01.644) 0:03:18.765 ********* 2025-08-29 17:28:51.302996 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.303003 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.303010 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.303016 | orchestrator | 2025-08-29 17:28:51.303023 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-08-29 17:28:51.303029 | orchestrator | Friday 29 August 2025 17:25:31 +0000 (0:00:01.239) 0:03:20.004 ********* 2025-08-29 17:28:51.303036 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.303042 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.303053 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.303060 | orchestrator | 2025-08-29 17:28:51.303066 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-08-29 17:28:51.303073 | orchestrator | Friday 29 August 2025 17:25:34 +0000 (0:00:02.089) 0:03:22.093 ********* 2025-08-29 17:28:51.303080 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:28:51.303086 | orchestrator | 2025-08-29 17:28:51.303093 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-08-29 17:28:51.303099 | orchestrator | Friday 29 August 2025 17:25:35 +0000 (0:00:01.366) 0:03:23.460 ********* 2025-08-29 17:28:51.303106 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 17:28:51.303113 | orchestrator | 2025-08-29 
17:28:51.303119 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-08-29 17:28:51.303126 | orchestrator | Friday 29 August 2025 17:25:37 +0000 (0:00:02.506) 0:03:25.967 ********* 2025-08-29 17:28:51.303137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:28:51.303145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 17:28:51.303152 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.303165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 
2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:28:51.303177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 17:28:51.303184 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.303199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:28:51.303207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 17:28:51.303218 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.303225 | orchestrator | 2025-08-29 17:28:51.303231 | 
orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-08-29 17:28:51.303238 | orchestrator | Friday 29 August 2025 17:25:40 +0000 (0:00:02.192) 0:03:28.160 ********* 2025-08-29 17:28:51.303245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:28:51.303256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 17:28:51.303263 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.303274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:28:51.303286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 17:28:51.303293 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.303305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:28:51.303315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 17:28:51.303340 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.303347 | orchestrator | 2025-08-29 17:28:51.303354 | 
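The mariadb containers above are health-checked via `/usr/bin/clustercheck`, with `AVAILABLE_WHEN_DONOR` set to `'1'` in the container environment. As a hedged sketch (illustrative names, not the actual script, which is a shell wrapper that queries `SHOW STATUS LIKE 'wsrep_local_state'` and answers HTTP 200/503), its availability decision reduces to:

```python
# wsrep_local_state values in a Galera cluster (illustrative constants):
SYNCED = 4  # node is fully synced with the cluster
DONOR = 2   # node is desynced while serving an SST snapshot to a joiner

def galera_available(wsrep_local_state: int, available_when_donor: bool = False) -> bool:
    """Decide whether the load balancer should treat this node as up.

    available_when_donor=True corresponds to AVAILABLE_WHEN_DONOR=1 in the
    logged container environment: a donor node keeps accepting traffic.
    """
    if wsrep_local_state == SYNCED:
        return True
    return available_when_donor and wsrep_local_state == DONOR
```

With `AVAILABLE_WHEN_DONOR` enabled as in this deployment, a node stays in the HAProxy pool even while donating a state snapshot, which avoids dropping below quorum capacity on a three-node testbed.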
orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-08-29 17:28:51.303360 | orchestrator | Friday 29 August 2025 17:25:42 +0000 (0:00:02.366) 0:03:30.526 ********* 2025-08-29 17:28:51.303367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 17:28:51.303374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 17:28:51.303381 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.303388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 17:28:51.303395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 17:28:51.303402 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.303413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 17:28:51.303420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 17:28:51.303434 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.303440 | orchestrator | 2025-08-29 17:28:51.303447 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-08-29 17:28:51.303454 | orchestrator | Friday 29 August 2025 17:25:45 +0000 (0:00:03.370) 0:03:33.897 ********* 2025-08-29 17:28:51.303464 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.303471 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.303477 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.303484 | orchestrator | 2025-08-29 17:28:51.303490 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-08-29 17:28:51.303497 | orchestrator | Friday 29 August 2025 17:25:47 +0000 (0:00:01.842) 0:03:35.739 ********* 2025-08-29 17:28:51.303504 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.303510 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.303517 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.303523 | orchestrator | 2025-08-29 17:28:51.303530 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-08-29 17:28:51.303537 | orchestrator | Friday 29 August 2025 17:25:49 +0000 (0:00:01.441) 0:03:37.181 ********* 2025-08-29 17:28:51.303543 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.303550 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.303557 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.303563 | orchestrator | 2025-08-29 17:28:51.303570 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-08-29 17:28:51.303576 | orchestrator | Friday 29 August 2025 17:25:49 +0000 (0:00:00.327) 0:03:37.508 ********* 2025-08-29 
17:28:51.303583 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:28:51.303590 | orchestrator | 2025-08-29 17:28:51.303596 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-08-29 17:28:51.303603 | orchestrator | Friday 29 August 2025 17:25:50 +0000 (0:00:01.448) 0:03:38.957 ********* 2025-08-29 17:28:51.303610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 17:28:51.303618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 17:28:51.303629 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 17:28:51.303641 | orchestrator | 2025-08-29 17:28:51.303647 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-08-29 17:28:51.303654 | orchestrator | Friday 29 August 2025 17:25:52 +0000 (0:00:01.494) 0:03:40.452 ********* 2025-08-29 17:28:51.303661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 17:28:51.303668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 17:28:51.303675 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.303682 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.303689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 17:28:51.303696 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.303703 | orchestrator | 2025-08-29 17:28:51.303709 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-08-29 17:28:51.303716 | orchestrator | Friday 29 August 2025 17:25:52 +0000 (0:00:00.417) 0:03:40.869 ********* 2025-08-29 17:28:51.303734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 17:28:51.303742 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.303749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 17:28:51.303760 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.303771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 17:28:51.303778 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.303784 | orchestrator | 2025-08-29 17:28:51.303791 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-08-29 17:28:51.303798 | orchestrator | Friday 29 August 2025 17:25:53 +0000 (0:00:00.880) 0:03:41.750 ********* 2025-08-29 17:28:51.303805 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.303811 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.303818 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.303824 | orchestrator | 2025-08-29 17:28:51.303831 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-08-29 17:28:51.303838 | orchestrator | Friday 29 August 2025 17:25:54 +0000 (0:00:00.473) 0:03:42.224 ********* 2025-08-29 17:28:51.303844 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.303851 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 17:28:51.303857 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.303864 | orchestrator | 2025-08-29 17:28:51.303871 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-08-29 17:28:51.303877 | orchestrator | Friday 29 August 2025 17:25:55 +0000 (0:00:01.467) 0:03:43.691 ********* 2025-08-29 17:28:51.303884 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.303890 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.303897 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.303904 | orchestrator | 2025-08-29 17:28:51.303910 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-08-29 17:28:51.303917 | orchestrator | Friday 29 August 2025 17:25:55 +0000 (0:00:00.333) 0:03:44.024 ********* 2025-08-29 17:28:51.303924 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:28:51.303930 | orchestrator | 2025-08-29 17:28:51.303937 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-08-29 17:28:51.303946 | orchestrator | Friday 29 August 2025 17:25:57 +0000 (0:00:01.558) 0:03:45.583 ********* 2025-08-29 17:28:51.303953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 17:28:51.303961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.303973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 17:28:51.304120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 17:28:51.304128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:28:51.304209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:28:51.304227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 17:28:51.304239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 
'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:28:51.304306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:28:51.304317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:28:51.304338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 17:28:51.304366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:28:51.304416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:28:51.304426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 17:28:51.304457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 17:28:51.304465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:28:51.304513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:28:51.304522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 17:28:51.304541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:28:51.304553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 17:28:51.304600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 
'timeout': '30'}}})  2025-08-29 17:28:51.304610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 17:28:51.304640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:28:51.304655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2025-08-29 17:28:51.304703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:28:51.304724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 17:28:51.304763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:28:51.304828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 
'timeout': '30'}}})  2025-08-29 17:28:51.304911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 17:28:51.304923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:28:51.304930 | orchestrator | 2025-08-29 17:28:51.304937 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-08-29 17:28:51.304944 | orchestrator | Friday 29 August 2025 17:26:02 +0000 (0:00:04.454) 0:03:50.038 ********* 2025-08-29 
17:28:51.304967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 17:28:51.304986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.304993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 17:28:51.305070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:28:51.305090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:28:51.305097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 17:28:51.305145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:28:51.305165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 17:28:51.305193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:28:51.305240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2025-08-29 17:28:51.305281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 17:28:51.305288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 17:28:51.305361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 
'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:28:51.305392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305418 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
17:28:51.305426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:28:51.305433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:28:51.305493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 
'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 17:28:51.305536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:28:51.305543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:28:51.305609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 17:28:51.305621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:28:51.305631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:28:51.305639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:28:51.305678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 17:28:51.305691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:28:51.305708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 17:28:51.305715 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.305722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:28:51.305728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.305754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 17:28:51.305769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:28:51.305786 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.305793 | orchestrator | 2025-08-29 17:28:51.305800 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-08-29 17:28:51.305811 | orchestrator | Friday 29 August 2025 17:26:03 +0000 (0:00:01.609) 0:03:51.647 ********* 2025-08-29 17:28:51.305818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-08-29 17:28:51.305825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-08-29 17:28:51.305832 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.305839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-08-29 17:28:51.305845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-08-29 17:28:51.305852 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.305859 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-08-29 17:28:51.305865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-08-29 17:28:51.305872 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.305879 | orchestrator | 2025-08-29 17:28:51.305885 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-08-29 17:28:51.305892 | orchestrator | Friday 29 August 2025 17:26:06 +0000 (0:00:02.512) 0:03:54.159 ********* 2025-08-29 17:28:51.305899 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.305906 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.305912 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.305919 | orchestrator | 2025-08-29 17:28:51.305926 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-08-29 17:28:51.305932 | orchestrator | Friday 29 August 2025 17:26:07 +0000 (0:00:01.264) 0:03:55.423 ********* 2025-08-29 17:28:51.305939 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.305946 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.305952 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.305959 | orchestrator | 2025-08-29 17:28:51.305965 | orchestrator | TASK [include_role : placement] ************************************************ 2025-08-29 17:28:51.305976 | orchestrator | Friday 29 August 2025 17:26:09 +0000 (0:00:02.060) 0:03:57.484 ********* 2025-08-29 17:28:51.305983 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:28:51.305990 | orchestrator | 2025-08-29 17:28:51.305996 | orchestrator | TASK [haproxy-config : Copying 
over placement haproxy config] ****************** 2025-08-29 17:28:51.306003 | orchestrator | Friday 29 August 2025 17:26:10 +0000 (0:00:01.243) 0:03:58.728 ********* 2025-08-29 17:28:51.306073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.306088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.306096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.306103 | orchestrator | 2025-08-29 17:28:51.306109 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-08-29 17:28:51.306117 | orchestrator | Friday 29 August 2025 17:26:14 +0000 (0:00:03.800) 0:04:02.528 ********* 2025-08-29 17:28:51.306123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-08-29 17:28:51.306135 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.306162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-08-29 17:28:51.306170 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.306177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-08-29 17:28:51.306184 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.306190 | orchestrator |
2025-08-29 17:28:51.306197 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-08-29 17:28:51.306204 | orchestrator | Friday 29 August 2025 17:26:15 +0000 (0:00:00.540) 0:04:03.069 *********
2025-08-29 17:28:51.306214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306228 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.306236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306259 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.306266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306278 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.306286 | orchestrator |
2025-08-29 17:28:51.306293 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-08-29 17:28:51.306300 | orchestrator | Friday 29 August 2025 17:26:15 +0000 (0:00:00.804) 0:04:03.874 *********
2025-08-29 17:28:51.306308 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:28:51.306315 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:28:51.306322 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:28:51.306346 | orchestrator |
2025-08-29 17:28:51.306354 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-08-29 17:28:51.306361 | orchestrator | Friday 29 August 2025 17:26:17 +0000 (0:00:01.291) 0:04:05.166 *********
2025-08-29 17:28:51.306368 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:28:51.306375 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:28:51.306382 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:28:51.306390 | orchestrator |
2025-08-29 17:28:51.306397 | orchestrator | TASK [include_role : nova] *****************************************************
2025-08-29 17:28:51.306404 | orchestrator | Friday 29 August 2025 17:26:19 +0000 (0:00:02.194) 0:04:07.361 *********
2025-08-29 17:28:51.306411 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:28:51.306419 | orchestrator |
2025-08-29 17:28:51.306426 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-08-29 17:28:51.306433 | orchestrator | Friday 29 August 2025 17:26:21 +0000 (0:00:01.757) 0:04:09.118 *********
2025-08-29 17:28:51.306462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 17:28:51.306476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 17:28:51.306490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.306499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 17:28:51.306527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.306536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.306547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.306555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.306568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.306576 | orchestrator |
2025-08-29 17:28:51.306583 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2025-08-29 17:28:51.306591 | orchestrator | Friday 29 August 2025 17:26:25 +0000 (0:00:04.141) 0:04:13.260 *********
2025-08-29 17:28:51.306616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 17:28:51.306625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.306632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.306639 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.306650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 17:28:51.306662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.306669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.306676 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.306701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 17:28:51.306713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.306725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:28:51.306732 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.306739 | orchestrator |
2025-08-29 17:28:51.306745 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-08-29 17:28:51.306752 | orchestrator | Friday 29 August 2025 17:26:26 +0000 (0:00:00.978) 0:04:14.239 *********
2025-08-29 17:28:51.306759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306786 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.306793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306837 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.306845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 17:28:51.306879 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.306885 | orchestrator |
2025-08-29 17:28:51.306892 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-08-29 17:28:51.306899 | orchestrator | Friday 29 August 2025 17:26:27 +0000 (0:00:01.302) 0:04:15.541 *********
2025-08-29 17:28:51.306910 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:28:51.306921 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:28:51.306932 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:28:51.306949 | orchestrator |
2025-08-29 17:28:51.306961 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-08-29 17:28:51.306977 | orchestrator | Friday 29 August 2025 17:26:28 +0000 (0:00:01.357) 0:04:16.899 *********
2025-08-29 17:28:51.306987 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:28:51.306998 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:28:51.307008 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:28:51.307018 | orchestrator |
2025-08-29 17:28:51.307029 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-08-29 17:28:51.307041 | orchestrator | Friday 29 August 2025 17:26:30 +0000 (0:00:01.996) 0:04:18.896 *********
2025-08-29 17:28:51.307051 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:28:51.307063 | orchestrator |
2025-08-29 17:28:51.307074 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-08-29 17:28:51.307085 | orchestrator | Friday 29 August 2025 17:26:32 +0000 (0:00:01.419) 0:04:20.315 *********
2025-08-29 17:28:51.307096 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-08-29 17:28:51.307105 | orchestrator |
2025-08-29 17:28:51.307112 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-08-29 17:28:51.307118 | orchestrator | Friday 29 August 2025 17:26:33 +0000 (0:00:00.807) 0:04:21.122 *********
2025-08-29 17:28:51.307125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:28:51.307132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:28:51.307140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:28:51.307146 | orchestrator |
2025-08-29 17:28:51.307153 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-08-29 17:28:51.307160 | orchestrator | Friday 29 August 2025 17:26:36 +0000 (0:00:03.822) 0:04:24.945 *********
2025-08-29 17:28:51.307200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:28:51.307209 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.307216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:28:51.307223 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.307234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:28:51.307241 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.307248 | orchestrator |
2025-08-29 17:28:51.307254 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-08-29 17:28:51.307261 | orchestrator | Friday 29 August 2025 17:26:38 +0000 (0:00:01.464) 0:04:26.409 *********
2025-08-29 17:28:51.307268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 17:28:51.307275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 17:28:51.307282 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.307289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 17:28:51.307296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 17:28:51.307303 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.307310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 17:28:51.307317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 17:28:51.307323 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.307373 | orchestrator |
2025-08-29 17:28:51.307380 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-08-29 17:28:51.307392 | orchestrator | Friday 29 August 2025 17:26:40 +0000 (0:00:01.637) 0:04:28.047 *********
2025-08-29 17:28:51.307399 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:28:51.307405 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:28:51.307412 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:28:51.307418 | orchestrator |
2025-08-29 17:28:51.307425 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-08-29 17:28:51.307432 | orchestrator | Friday 29 August 2025 17:26:42 +0000 (0:00:02.562) 0:04:30.609 *********
2025-08-29 17:28:51.307439 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:28:51.307445 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:28:51.307452 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:28:51.307458 | orchestrator |
2025-08-29 17:28:51.307465 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2025-08-29 17:28:51.307472 | orchestrator | Friday 29 August 2025 17:26:45 +0000 (0:00:03.107) 0:04:33.717 *********
2025-08-29 17:28:51.307498 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2025-08-29 17:28:51.307507 | orchestrator |
2025-08-29 17:28:51.307513 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2025-08-29 17:28:51.307520 | orchestrator | Friday 29 August 2025 17:26:47 +0000 (0:00:01.469) 0:04:35.186 *********
2025-08-29 17:28:51.307527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:28:51.307534 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.307541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:28:51.307552 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.307559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:28:51.307566 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.307573 | orchestrator |
2025-08-29 17:28:51.307579 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2025-08-29 17:28:51.307586 | orchestrator | Friday 29 August 2025 17:26:48 +0000 (0:00:01.295) 0:04:36.482 *********
2025-08-29 17:28:51.307593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:28:51.307605 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.307612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:28:51.307618 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.307625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:28:51.307632 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.307639 | orchestrator |
2025-08-29 17:28:51.307645 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2025-08-29 17:28:51.307652 | orchestrator | Friday 29 August 2025 17:26:49 +0000 (0:00:01.484) 0:04:37.966 *********
2025-08-29 17:28:51.307659 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:28:51.307665 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:28:51.307672 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:28:51.307678 | orchestrator |
2025-08-29 17:28:51.307702 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-08-29 17:28:51.307710 | orchestrator | Friday 29 August 2025 17:26:51 +0000 (0:00:01.820) 0:04:39.787 *********
2025-08-29 17:28:51.307717 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:28:51.307724 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:28:51.307730 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:28:51.307737 | orchestrator |
2025-08-29 17:28:51.307743 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-08-29 17:28:51.307750 | orchestrator | Friday 29 August 2025 17:26:54 +0000 (0:00:02.356) 0:04:42.143 *********
2025-08-29 17:28:51.307756 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:28:51.307762 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:28:51.307768 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:28:51.307774 | orchestrator |
2025-08-29 17:28:51.307780 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2025-08-29 17:28:51.307787 | orchestrator | Friday 29 August 2025 17:26:57 +0000 (0:00:03.051) 0:04:45.195 *********
2025-08-29 17:28:51.307793 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2025-08-29 17:28:51.307799 | orchestrator |
2025-08-29 17:28:51.307805 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2025-08-29 17:28:51.307811 | orchestrator | Friday 29 August 2025 17:26:58 +0000 (0:00:00.845) 0:04:46.040 *********
2025-08-29 17:28:51.307818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external':
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 17:28:51.307824 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.307831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 17:28:51.307841 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.307848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 17:28:51.307854 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.307860 | orchestrator | 2025-08-29 17:28:51.307867 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-08-29 17:28:51.307873 | orchestrator | Friday 29 August 2025 17:26:59 +0000 (0:00:01.346) 0:04:47.386 ********* 2025-08-29 17:28:51.307879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 17:28:51.307886 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.307892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 17:28:51.307898 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.307921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 17:28:51.307928 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.307935 | orchestrator | 2025-08-29 17:28:51.307941 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-08-29 17:28:51.307961 | orchestrator | Friday 29 August 2025 17:27:00 +0000 
(0:00:01.557) 0:04:48.943 ********* 2025-08-29 17:28:51.307968 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.307974 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.307980 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.307986 | orchestrator | 2025-08-29 17:28:51.307992 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-08-29 17:28:51.307998 | orchestrator | Friday 29 August 2025 17:27:02 +0000 (0:00:01.571) 0:04:50.515 ********* 2025-08-29 17:28:51.308004 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:28:51.308014 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:28:51.308021 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:28:51.308027 | orchestrator | 2025-08-29 17:28:51.308033 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-08-29 17:28:51.308039 | orchestrator | Friday 29 August 2025 17:27:04 +0000 (0:00:02.429) 0:04:52.944 ********* 2025-08-29 17:28:51.308045 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:28:51.308051 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:28:51.308060 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:28:51.308067 | orchestrator | 2025-08-29 17:28:51.308073 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-08-29 17:28:51.308079 | orchestrator | Friday 29 August 2025 17:27:08 +0000 (0:00:03.305) 0:04:56.249 ********* 2025-08-29 17:28:51.308085 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:28:51.308091 | orchestrator | 2025-08-29 17:28:51.308098 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-08-29 17:28:51.308104 | orchestrator | Friday 29 August 2025 17:27:09 +0000 (0:00:01.685) 0:04:57.935 ********* 2025-08-29 17:28:51.308110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 
'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.308117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 17:28:51.308124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.308148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.308156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.308170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.308177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 17:28:51.308183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.308190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.308212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.308219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.308236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 17:28:51.308242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.308249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.308256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.308262 | orchestrator | 2025-08-29 17:28:51.308268 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-08-29 17:28:51.308275 | orchestrator | Friday 29 August 2025 17:27:13 +0000 (0:00:03.675) 0:05:01.611 ********* 2025-08-29 17:28:51.308297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 17:28:51.308311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 17:28:51.308321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.308342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.308349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 
17:28:51.308355 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.308362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 17:28:51.308386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 17:28:51.308398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.308404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.308414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.308421 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.308427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 17:28:51.308434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 17:28:51.308440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.308469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 17:28:51.308476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:28:51.308483 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.308489 | orchestrator | 2025-08-29 17:28:51.308495 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-08-29 17:28:51.308501 | orchestrator | Friday 29 August 2025 17:27:14 +0000 (0:00:00.685) 0:05:02.297 ********* 2025-08-29 17:28:51.308511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 17:28:51.308517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 17:28:51.308524 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.308530 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 17:28:51.308536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 17:28:51.308543 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.308549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 17:28:51.308555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 17:28:51.308561 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.308568 | orchestrator | 2025-08-29 17:28:51.308574 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-08-29 17:28:51.308580 | orchestrator | Friday 29 August 2025 17:27:15 +0000 (0:00:01.453) 0:05:03.750 ********* 2025-08-29 17:28:51.308586 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.308592 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.308598 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.308604 | orchestrator | 2025-08-29 17:28:51.308610 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-08-29 17:28:51.308621 | orchestrator | Friday 29 August 2025 17:27:17 +0000 (0:00:01.469) 0:05:05.219 ********* 2025-08-29 17:28:51.308627 | orchestrator | changed: [testbed-node-0] 2025-08-29 
17:28:51.308633 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.308640 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.308646 | orchestrator | 2025-08-29 17:28:51.308652 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-08-29 17:28:51.308658 | orchestrator | Friday 29 August 2025 17:27:19 +0000 (0:00:02.193) 0:05:07.413 ********* 2025-08-29 17:28:51.308664 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:28:51.308670 | orchestrator | 2025-08-29 17:28:51.308676 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-08-29 17:28:51.308682 | orchestrator | Friday 29 August 2025 17:27:20 +0000 (0:00:01.461) 0:05:08.874 ********* 2025-08-29 17:28:51.308705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 17:28:51.308714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 17:28:51.308724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 17:28:51.308731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:28:51.308759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:28:51.308768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:28:51.308775 | orchestrator | 2025-08-29 17:28:51.308781 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-08-29 17:28:51.308787 | orchestrator | Friday 29 August 2025 17:27:26 +0000 (0:00:05.661) 0:05:14.536 ********* 2025-08-29 17:28:51.308797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 17:28:51.308804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 17:28:51.308816 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.308822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 17:28:51.308846 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 17:28:51.308854 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.308864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 17:28:51.308871 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 17:28:51.308882 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.308888 | orchestrator | 2025-08-29 17:28:51.308894 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-08-29 17:28:51.308900 | orchestrator | Friday 29 August 2025 17:27:27 +0000 (0:00:00.695) 0:05:15.231 ********* 2025-08-29 17:28:51.308906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 17:28:51.308912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 17:28:51.308919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 17:28:51.308925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 17:28:51.308947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 17:28:51.308955 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.308961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 17:28:51.308967 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.308973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 17:28:51.308980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 17:28:51.308986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 17:28:51.308992 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.308999 | orchestrator | 2025-08-29 17:28:51.309008 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-08-29 17:28:51.309014 | orchestrator | Friday 29 August 2025 17:27:28 +0000 (0:00:00.983) 0:05:16.215 ********* 2025-08-29 17:28:51.309021 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.309027 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.309052 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.309059 | orchestrator | 2025-08-29 17:28:51.309070 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-08-29 17:28:51.309076 | orchestrator | Friday 29 August 2025 17:27:29 +0000 (0:00:00.823) 0:05:17.038 ********* 2025-08-29 17:28:51.309082 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.309089 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.309095 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.309101 | orchestrator | 2025-08-29 17:28:51.309107 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-08-29 17:28:51.309113 | orchestrator | Friday 29 August 2025 17:27:30 +0000 (0:00:01.376) 0:05:18.415 ********* 2025-08-29 17:28:51.309120 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:28:51.309126 | orchestrator | 2025-08-29 17:28:51.309132 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-08-29 17:28:51.309138 | orchestrator | Friday 29 August 2025 17:27:31 +0000 (0:00:01.593) 0:05:20.008 ********* 2025-08-29 17:28:51.309145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 
'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 17:28:51.309152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 17:28:51.309176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 17:28:51.309184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 17:28:51.309206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-08-29 17:28:51.309220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 17:28:51.309226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 17:28:51.309257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 
'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 17:28:51.309265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 17:28:51.309279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 17:28:51.309299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 17:28:51.309309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 
'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 17:28:51.309316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 17:28:51.309356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 17:28:51.309363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 17:28:51.309374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 17:28:51.309401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 17:28:51.309408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 17:28:51.309415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 17:28:51.309437 | orchestrator | 2025-08-29 17:28:51.309444 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-08-29 17:28:51.309454 | orchestrator | Friday 29 August 2025 17:27:36 +0000 (0:00:04.168) 0:05:24.177 ********* 2025-08-29 17:28:51.309460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 17:28:51.309470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 17:28:51.309477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 17:28:51.309499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 17:28:51.309510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 
'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 17:28:51.309520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 17:28:51.309539 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.309546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 17:28:51.309552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 17:28:51.309561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 17:28:51.309590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 17:28:51.309597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 17:28:51.309604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 17:28:51.309624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 17:28:51.309642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 17:28:51.309648 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.309655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
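The alternating `changed`/`skipping` results above come from the haproxy-config role looping over a dict of service definitions. A minimal Python sketch of that per-item decision — a hypothetical helper for illustration, not the actual kolla-ansible Jinja/YAML logic — assuming an item is templated only when the service is enabled and carries an `haproxy` section:

```python
# Simplified model (assumption, not kolla-ansible source) of the per-item
# outcome seen in the log: "changed" when a service is enabled and defines
# an 'haproxy' section, "skipping" otherwise.

def haproxy_item_status(service: dict) -> str:
    """Return 'changed' or 'skipping' for one with_dict loop item."""
    if not service.get("enabled"):
        return "skipping"   # e.g. prometheus-openstack-exporter (enabled: False)
    if "haproxy" not in service:
        return "skipping"   # exporters with no load-balancer frontend
    return "changed"        # e.g. prometheus-alertmanager

# Illustrative items mirroring the shapes in the log above
services = {
    "prometheus-alertmanager": {
        "enabled": True,
        "haproxy": {"prometheus_alertmanager": {"port": "9093"}},
    },
    "prometheus-openstack-exporter": {"enabled": False},
    "prometheus-libvirt-exporter": {"enabled": True},
}

for name, svc in services.items():
    print(f"{name} -> {haproxy_item_status(svc)}")
```

Under this reading, only `prometheus-alertmanager` produces `changed`, matching the pattern in the transcript where exporters without an enabled HAProxy frontend are reported as `skipping` on every node.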
2025-08-29 17:28:51.309677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 17:28:51.309689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 17:28:51.309698 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:28:51.309711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 17:28:51.309718 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.309724 | orchestrator | 2025-08-29 17:28:51.309730 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-08-29 17:28:51.309737 | orchestrator | Friday 29 August 2025 17:27:37 +0000 (0:00:01.428) 0:05:25.605 ********* 2025-08-29 17:28:51.309743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-08-29 17:28:51.309749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-08-29 17:28:51.309760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 17:28:51.309767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 17:28:51.309774 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.309780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-08-29 17:28:51.309789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-08-29 17:28:51.309796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 17:28:51.309803 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 17:28:51.309809 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.309815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-08-29 17:28:51.309825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-08-29 17:28:51.309831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 17:28:51.309838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 17:28:51.309844 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.309850 | orchestrator | 2025-08-29 17:28:51.309857 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-08-29 17:28:51.309863 | orchestrator | Friday 29 August 2025 17:27:38 +0000 (0:00:01.010) 0:05:26.615 ********* 2025-08-29 17:28:51.309869 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.309875 
| orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.309882 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.309888 | orchestrator | 2025-08-29 17:28:51.309894 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-08-29 17:28:51.309901 | orchestrator | Friday 29 August 2025 17:27:39 +0000 (0:00:00.497) 0:05:27.113 ********* 2025-08-29 17:28:51.309907 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.309913 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.309919 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.309929 | orchestrator | 2025-08-29 17:28:51.309935 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-08-29 17:28:51.309942 | orchestrator | Friday 29 August 2025 17:27:40 +0000 (0:00:01.484) 0:05:28.598 ********* 2025-08-29 17:28:51.309948 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:28:51.309954 | orchestrator | 2025-08-29 17:28:51.309960 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-08-29 17:28:51.309966 | orchestrator | Friday 29 August 2025 17:27:42 +0000 (0:00:01.808) 0:05:30.407 ********* 2025-08-29 17:28:51.309972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 17:28:51.309983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 17:28:51.309993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 17:28:51.310000 | orchestrator | 2025-08-29 17:28:51.310006 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-08-29 17:28:51.310012 | orchestrator | Friday 29 August 2025 17:27:45 +0000 (0:00:02.773) 0:05:33.180 ********* 2025-08-29 17:28:51.310043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 17:28:51.310057 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.310064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 17:28:51.310070 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.310081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 17:28:51.310088 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.310094 | orchestrator | 2025-08-29 17:28:51.310100 | orchestrator | TASK 
[haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-08-29 17:28:51.310107 | orchestrator | Friday 29 August 2025 17:27:45 +0000 (0:00:00.431) 0:05:33.612 ********* 2025-08-29 17:28:51.310113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 17:28:51.310119 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.310128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 17:28:51.310135 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.310141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 17:28:51.310147 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.310153 | orchestrator | 2025-08-29 17:28:51.310163 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-08-29 17:28:51.310170 | orchestrator | Friday 29 August 2025 17:27:46 +0000 (0:00:01.011) 0:05:34.624 ********* 2025-08-29 17:28:51.310176 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.310182 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.310188 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.310194 | orchestrator | 2025-08-29 17:28:51.310200 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-08-29 17:28:51.310207 | orchestrator | Friday 29 August 2025 17:27:47 +0000 (0:00:00.475) 0:05:35.100 ********* 2025-08-29 17:28:51.310213 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.310219 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.310225 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 17:28:51.310231 | orchestrator | 2025-08-29 17:28:51.310237 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-08-29 17:28:51.310243 | orchestrator | Friday 29 August 2025 17:27:48 +0000 (0:00:01.435) 0:05:36.535 ********* 2025-08-29 17:28:51.310250 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:28:51.310256 | orchestrator | 2025-08-29 17:28:51.310262 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-08-29 17:28:51.310268 | orchestrator | Friday 29 August 2025 17:27:50 +0000 (0:00:01.841) 0:05:38.377 ********* 2025-08-29 17:28:51.310274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.310284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.310291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.310305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.310313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.310319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 17:28:51.310337 | orchestrator | 2025-08-29 17:28:51.310347 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-08-29 17:28:51.310354 | orchestrator | Friday 29 August 2025 17:27:56 +0000 (0:00:06.621) 0:05:44.999 ********* 2025-08-29 17:28:51.310360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 17:28:51.310374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 17:28:51.310381 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.310387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 17:28:51.310394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 17:28:51.310400 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.310410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 17:28:51.310425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 17:28:51.310432 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.310438 | orchestrator | 2025-08-29 17:28:51.310445 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-08-29 17:28:51.310451 | orchestrator | Friday 29 August 2025 17:27:57 +0000 (0:00:00.637) 0:05:45.636 ********* 2025-08-29 17:28:51.310457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 17:28:51.310463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 17:28:51.310470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 17:28:51.310476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 17:28:51.310482 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.310488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 17:28:51.310495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 17:28:51.310501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 17:28:51.310507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 17:28:51.310514 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.310520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 17:28:51.310529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 17:28:51.310536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 17:28:51.310546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 17:28:51.310552 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.310559 | orchestrator | 2025-08-29 17:28:51.310565 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-08-29 17:28:51.310571 | orchestrator | Friday 29 August 2025 17:27:59 +0000 (0:00:01.706) 0:05:47.343 ********* 2025-08-29 17:28:51.310577 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.310583 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.310589 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.310595 | orchestrator | 2025-08-29 17:28:51.310601 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-08-29 17:28:51.310608 | orchestrator | Friday 29 August 2025 17:28:00 +0000 (0:00:01.379) 0:05:48.722 ********* 2025-08-29 17:28:51.310614 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.310620 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.310626 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.310632 | orchestrator | 2025-08-29 17:28:51.310638 | orchestrator | TASK [include_role : swift] **************************************************** 2025-08-29 17:28:51.310644 | orchestrator | Friday 29 August 2025 17:28:02 +0000 (0:00:02.301) 0:05:51.023 ********* 2025-08-29 17:28:51.310653 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.310660 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.310666 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.310672 | orchestrator | 2025-08-29 17:28:51.310678 | 
orchestrator | TASK [include_role : tacker] *************************************************** 2025-08-29 17:28:51.310684 | orchestrator | Friday 29 August 2025 17:28:03 +0000 (0:00:00.386) 0:05:51.410 ********* 2025-08-29 17:28:51.310690 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.310696 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.310702 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.310708 | orchestrator | 2025-08-29 17:28:51.310714 | orchestrator | TASK [include_role : trove] **************************************************** 2025-08-29 17:28:51.310721 | orchestrator | Friday 29 August 2025 17:28:03 +0000 (0:00:00.332) 0:05:51.743 ********* 2025-08-29 17:28:51.310727 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.310733 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.310739 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.310746 | orchestrator | 2025-08-29 17:28:51.310752 | orchestrator | TASK [include_role : venus] **************************************************** 2025-08-29 17:28:51.310758 | orchestrator | Friday 29 August 2025 17:28:04 +0000 (0:00:00.769) 0:05:52.512 ********* 2025-08-29 17:28:51.310764 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.310770 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.310776 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.310782 | orchestrator | 2025-08-29 17:28:51.310788 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-08-29 17:28:51.310795 | orchestrator | Friday 29 August 2025 17:28:04 +0000 (0:00:00.376) 0:05:52.889 ********* 2025-08-29 17:28:51.310801 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.310807 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.310813 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.310819 | orchestrator | 2025-08-29 17:28:51.310825 | 
orchestrator | TASK [include_role : zun] ****************************************************** 2025-08-29 17:28:51.310832 | orchestrator | Friday 29 August 2025 17:28:05 +0000 (0:00:00.325) 0:05:53.215 ********* 2025-08-29 17:28:51.310838 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.310844 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.310850 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.310860 | orchestrator | 2025-08-29 17:28:51.310866 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-08-29 17:28:51.310872 | orchestrator | Friday 29 August 2025 17:28:06 +0000 (0:00:00.970) 0:05:54.186 ********* 2025-08-29 17:28:51.310878 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:28:51.310885 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:28:51.310891 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:28:51.310897 | orchestrator | 2025-08-29 17:28:51.310903 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-08-29 17:28:51.310909 | orchestrator | Friday 29 August 2025 17:28:06 +0000 (0:00:00.731) 0:05:54.917 ********* 2025-08-29 17:28:51.310915 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:28:51.310922 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:28:51.310928 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:28:51.310934 | orchestrator | 2025-08-29 17:28:51.310940 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-08-29 17:28:51.310946 | orchestrator | Friday 29 August 2025 17:28:07 +0000 (0:00:00.416) 0:05:55.334 ********* 2025-08-29 17:28:51.310952 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:28:51.310959 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:28:51.310965 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:28:51.310971 | orchestrator | 2025-08-29 17:28:51.310977 | orchestrator | RUNNING HANDLER [loadbalancer : Stop 
backup haproxy container] ***************** 2025-08-29 17:28:51.310983 | orchestrator | Friday 29 August 2025 17:28:08 +0000 (0:00:00.894) 0:05:56.229 ********* 2025-08-29 17:28:51.310990 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:28:51.310996 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:28:51.311002 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:28:51.311008 | orchestrator | 2025-08-29 17:28:51.311014 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-08-29 17:28:51.311020 | orchestrator | Friday 29 August 2025 17:28:09 +0000 (0:00:01.298) 0:05:57.528 ********* 2025-08-29 17:28:51.311026 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:28:51.311032 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:28:51.311041 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:28:51.311047 | orchestrator | 2025-08-29 17:28:51.311054 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-08-29 17:28:51.311060 | orchestrator | Friday 29 August 2025 17:28:10 +0000 (0:00:00.927) 0:05:58.455 ********* 2025-08-29 17:28:51.311066 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.311072 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.311078 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.311084 | orchestrator | 2025-08-29 17:28:51.311091 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-08-29 17:28:51.311097 | orchestrator | Friday 29 August 2025 17:28:20 +0000 (0:00:10.022) 0:06:08.478 ********* 2025-08-29 17:28:51.311103 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:28:51.311109 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:28:51.311115 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:28:51.311121 | orchestrator | 2025-08-29 17:28:51.311127 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-08-29 
17:28:51.311133 | orchestrator | Friday 29 August 2025 17:28:21 +0000 (0:00:00.950) 0:06:09.428 ********* 2025-08-29 17:28:51.311139 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.311146 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.311152 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.311158 | orchestrator | 2025-08-29 17:28:51.311164 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-08-29 17:28:51.311170 | orchestrator | Friday 29 August 2025 17:28:30 +0000 (0:00:08.923) 0:06:18.351 ********* 2025-08-29 17:28:51.311176 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:28:51.311182 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:28:51.311188 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:28:51.311194 | orchestrator | 2025-08-29 17:28:51.311200 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-08-29 17:28:51.311211 | orchestrator | Friday 29 August 2025 17:28:33 +0000 (0:00:03.594) 0:06:21.946 ********* 2025-08-29 17:28:51.311217 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:28:51.311223 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:28:51.311232 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:28:51.311238 | orchestrator | 2025-08-29 17:28:51.311244 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-08-29 17:28:51.311251 | orchestrator | Friday 29 August 2025 17:28:43 +0000 (0:00:09.593) 0:06:31.539 ********* 2025-08-29 17:28:51.311257 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.311263 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.311269 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.311275 | orchestrator | 2025-08-29 17:28:51.311282 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-08-29 17:28:51.311288 | 
orchestrator | Friday 29 August 2025 17:28:43 +0000 (0:00:00.439) 0:06:31.979 ********* 2025-08-29 17:28:51.311294 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.311300 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.311306 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.311312 | orchestrator | 2025-08-29 17:28:51.311318 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-08-29 17:28:51.311361 | orchestrator | Friday 29 August 2025 17:28:44 +0000 (0:00:00.420) 0:06:32.400 ********* 2025-08-29 17:28:51.311368 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.311374 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.311381 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.311387 | orchestrator | 2025-08-29 17:28:51.311393 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-08-29 17:28:51.311399 | orchestrator | Friday 29 August 2025 17:28:45 +0000 (0:00:00.851) 0:06:33.252 ********* 2025-08-29 17:28:51.311405 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.311411 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.311417 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.311423 | orchestrator | 2025-08-29 17:28:51.311430 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-08-29 17:28:51.311436 | orchestrator | Friday 29 August 2025 17:28:45 +0000 (0:00:00.430) 0:06:33.682 ********* 2025-08-29 17:28:51.311442 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.311448 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.311454 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.311460 | orchestrator | 2025-08-29 17:28:51.311467 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-08-29 17:28:51.311473 | 
orchestrator | Friday 29 August 2025 17:28:46 +0000 (0:00:00.443) 0:06:34.126 ********* 2025-08-29 17:28:51.311479 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:28:51.311485 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:28:51.311491 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:28:51.311497 | orchestrator | 2025-08-29 17:28:51.311503 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-08-29 17:28:51.311509 | orchestrator | Friday 29 August 2025 17:28:46 +0000 (0:00:00.432) 0:06:34.559 ********* 2025-08-29 17:28:51.311515 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:28:51.311522 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:28:51.311528 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:28:51.311534 | orchestrator | 2025-08-29 17:28:51.311540 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-08-29 17:28:51.311546 | orchestrator | Friday 29 August 2025 17:28:48 +0000 (0:00:01.529) 0:06:36.088 ********* 2025-08-29 17:28:51.311552 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:28:51.311558 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:28:51.311565 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:28:51.311571 | orchestrator | 2025-08-29 17:28:51.311577 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:28:51.311583 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-08-29 17:28:51.311594 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-08-29 17:28:51.311601 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-08-29 17:28:51.311607 | orchestrator | 2025-08-29 17:28:51.311613 | orchestrator | 2025-08-29 17:28:51.311623 | orchestrator | TASKS RECAP 
******************************************************************** 2025-08-29 17:28:51.311629 | orchestrator | Friday 29 August 2025 17:28:49 +0000 (0:00:00.991) 0:06:37.080 ********* 2025-08-29 17:28:51.311636 | orchestrator | =============================================================================== 2025-08-29 17:28:51.311642 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.02s 2025-08-29 17:28:51.311648 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.59s 2025-08-29 17:28:51.311654 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.92s 2025-08-29 17:28:51.311660 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.62s 2025-08-29 17:28:51.311666 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.89s 2025-08-29 17:28:51.311672 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.66s 2025-08-29 17:28:51.311678 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.11s 2025-08-29 17:28:51.311685 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.84s 2025-08-29 17:28:51.311691 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.84s 2025-08-29 17:28:51.311697 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.79s 2025-08-29 17:28:51.311703 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.45s 2025-08-29 17:28:51.311709 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.38s 2025-08-29 17:28:51.311715 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.31s 2025-08-29 17:28:51.311724 | orchestrator | haproxy-config : Copying over 
keystone haproxy config ------------------- 4.25s 2025-08-29 17:28:51.311731 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.17s 2025-08-29 17:28:51.311737 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.14s 2025-08-29 17:28:51.311742 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.14s 2025-08-29 17:28:51.311748 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.07s 2025-08-29 17:28:51.311753 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.04s 2025-08-29 17:28:51.311758 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.99s 2025-08-29 17:28:51.311764 | orchestrator | 2025-08-29 17:28:51 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:28:51.311769 | orchestrator | 2025-08-29 17:28:51 | INFO  | Task 37e6ac28-8ad3-44b4-b556-864cdd988461 is in state STARTED 2025-08-29 17:28:51.311775 | orchestrator | 2025-08-29 17:28:51 | INFO  | Task 2acb7d51-bda0-44fe-99a5-0782ab9bb4af is in state STARTED 2025-08-29 17:28:51.311780 | orchestrator | 2025-08-29 17:28:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:28:54.340795 | orchestrator | 2025-08-29 17:28:54 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:28:54.342947 | orchestrator | 2025-08-29 17:28:54 | INFO  | Task 37e6ac28-8ad3-44b4-b556-864cdd988461 is in state STARTED 2025-08-29 17:28:54.345312 | orchestrator | 2025-08-29 17:28:54 | INFO  | Task 2acb7d51-bda0-44fe-99a5-0782ab9bb4af is in state STARTED 2025-08-29 17:28:54.345714 | orchestrator | 2025-08-29 17:28:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:28:57.386307 | orchestrator | 2025-08-29 17:28:57 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:28:57.388838 | 
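The repeated status lines around this point are a plain poll-until-terminal-state loop: the client fetches each task's state, waits, and checks again until the task leaves STARTED. A minimal sketch of that pattern (hypothetical `fetch_state` callable and function name; the real OSISM client polls its own task API, not this code):

```python
import time

def poll_until_done(fetch_state, interval=1.0, timeout=600.0):
    """Poll fetch_state() until the task reaches a terminal state, or give up."""
    terminal = {"SUCCESS", "FAILURE"}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = fetch_state()
        if state in terminal:
            return state
        # corresponds to the "Wait 1 second(s) until the next check" log lines
        time.sleep(interval)
    raise TimeoutError("task did not reach a terminal state in time")
```

Note that the log prints "Wait 1 second(s)" but the observed timestamps advance by roughly three seconds per cycle, so the effective period also includes the time spent querying the three task states.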
orchestrator | 2025-08-29 17:28:57 | INFO  | Task 37e6ac28-8ad3-44b4-b556-864cdd988461 is in state STARTED 2025-08-29 17:28:57.390724 | orchestrator | 2025-08-29 17:28:57 | INFO  | Task 2acb7d51-bda0-44fe-99a5-0782ab9bb4af is in state STARTED [... identical status checks for the three tasks (58b73c74-f2c2-4e5e-8347-27cf2b55a92b, 37e6ac28-8ad3-44b4-b556-864cdd988461, 2acb7d51-bda0-44fe-99a5-0782ab9bb4af), all remaining in state STARTED, repeated roughly every 3 seconds from 17:28:57 through 17:30:37 ...] 2025-08-29 17:30:40.985740 | orchestrator | 2025-08-29 17:30:40 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:30:40.988600 | orchestrator | 2025-08-29 17:30:40 | INFO  | Task 37e6ac28-8ad3-44b4-b556-864cdd988461 is in state STARTED 2025-08-29 17:30:40.990744 | orchestrator | 2025-08-29 17:30:40 | INFO  | Task 2acb7d51-bda0-44fe-99a5-0782ab9bb4af is in state STARTED 2025-08-29
17:30:40.990773 | orchestrator | 2025-08-29 17:30:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:30:44.042076 | orchestrator | 2025-08-29 17:30:44 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state STARTED 2025-08-29 17:30:44.045374 | orchestrator | 2025-08-29 17:30:44 | INFO  | Task 37e6ac28-8ad3-44b4-b556-864cdd988461 is in state STARTED 2025-08-29 17:30:44.047799 | orchestrator | 2025-08-29 17:30:44 | INFO  | Task 2acb7d51-bda0-44fe-99a5-0782ab9bb4af is in state STARTED 2025-08-29 17:30:44.048052 | orchestrator | 2025-08-29 17:30:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:30:47.085026 | orchestrator | 2025-08-29 17:30:47 | INFO  | Task dbf9536b-4a8e-45e2-82bf-b4dd083cefa1 is in state STARTED 2025-08-29 17:30:47.090272 | orchestrator | 2025-08-29 17:30:47 | INFO  | Task 58b73c74-f2c2-4e5e-8347-27cf2b55a92b is in state SUCCESS 2025-08-29 17:30:47.092555 | orchestrator | 2025-08-29 17:30:47.092569 | orchestrator | 2025-08-29 17:30:47.092574 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-08-29 17:30:47.092578 | orchestrator | 2025-08-29 17:30:47.092583 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-08-29 17:30:47.092587 | orchestrator | Friday 29 August 2025 17:19:07 +0000 (0:00:01.166) 0:00:01.166 ********* 2025-08-29 17:30:47.092592 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:30:47.092598 | orchestrator | 2025-08-29 17:30:47.092602 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-08-29 17:30:47.092606 | orchestrator | Friday 29 August 2025 17:19:09 +0000 (0:00:01.551) 0:00:02.718 ********* 2025-08-29 17:30:47.092610 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.092615 | orchestrator | ok: 
[testbed-node-4] 2025-08-29 17:30:47.092619 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.092623 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.092627 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.092631 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.092635 | orchestrator | 2025-08-29 17:30:47.092639 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-08-29 17:30:47.092643 | orchestrator | Friday 29 August 2025 17:19:10 +0000 (0:00:01.827) 0:00:04.545 ********* 2025-08-29 17:30:47.092647 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.092651 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.092655 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.092659 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.092663 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.092667 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.092671 | orchestrator | 2025-08-29 17:30:47.092675 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-08-29 17:30:47.092679 | orchestrator | Friday 29 August 2025 17:19:12 +0000 (0:00:01.388) 0:00:05.934 ********* 2025-08-29 17:30:47.092683 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.092687 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.092691 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.092695 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.092699 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.092703 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.092707 | orchestrator | 2025-08-29 17:30:47.092711 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-08-29 17:30:47.092715 | orchestrator | Friday 29 August 2025 17:19:13 +0000 (0:00:01.064) 0:00:06.999 ********* 2025-08-29 17:30:47.092719 | orchestrator | ok: [testbed-node-3] 2025-08-29 
17:30:47.092723 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.092727 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.092731 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.092735 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.092739 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.092743 | orchestrator | 2025-08-29 17:30:47.092747 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-08-29 17:30:47.092751 | orchestrator | Friday 29 August 2025 17:19:14 +0000 (0:00:01.105) 0:00:08.105 ********* 2025-08-29 17:30:47.092755 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.092759 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.092763 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.092767 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.092784 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.092788 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.092792 | orchestrator | 2025-08-29 17:30:47.092796 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-08-29 17:30:47.092800 | orchestrator | Friday 29 August 2025 17:19:15 +0000 (0:00:00.657) 0:00:08.762 ********* 2025-08-29 17:30:47.092804 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.092814 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.092818 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.092822 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.092826 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.092830 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.092834 | orchestrator | 2025-08-29 17:30:47.092838 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-08-29 17:30:47.092842 | orchestrator | Friday 29 August 2025 17:19:16 +0000 (0:00:00.937) 0:00:09.700 ********* 2025-08-29 17:30:47.092846 | orchestrator | 
skipping: [testbed-node-3] 2025-08-29 17:30:47.092850 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.092854 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.092858 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.092862 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.092866 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.092870 | orchestrator | 2025-08-29 17:30:47.092874 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-08-29 17:30:47.092878 | orchestrator | Friday 29 August 2025 17:19:16 +0000 (0:00:00.937) 0:00:10.638 ********* 2025-08-29 17:30:47.092882 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.092886 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.092890 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.092894 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.092898 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.092902 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.092906 | orchestrator | 2025-08-29 17:30:47.092910 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-08-29 17:30:47.092914 | orchestrator | Friday 29 August 2025 17:19:17 +0000 (0:00:00.913) 0:00:11.552 ********* 2025-08-29 17:30:47.092918 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 17:30:47.092922 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 17:30:47.092926 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 17:30:47.092929 | orchestrator | 2025-08-29 17:30:47.092933 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-08-29 17:30:47.092937 | orchestrator | Friday 29 August 2025 17:19:18 +0000 (0:00:00.657) 0:00:12.209 ********* 
2025-08-29 17:30:47.092942 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.092949 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.092954 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.092958 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.092962 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.092966 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.092970 | orchestrator |
2025-08-29 17:30:47.092979 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-08-29 17:30:47.092983 | orchestrator | Friday 29 August 2025 17:19:19 +0000 (0:00:01.352) 0:00:13.562 *********
2025-08-29 17:30:47.092987 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-08-29 17:30:47.092991 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 17:30:47.092995 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 17:30:47.092999 | orchestrator |
2025-08-29 17:30:47.093003 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-08-29 17:30:47.093006 | orchestrator | Friday 29 August 2025 17:19:22 +0000 (0:00:02.891) 0:00:16.453 *********
2025-08-29 17:30:47.093011 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 17:30:47.093018 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 17:30:47.093022 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 17:30:47.093025 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.093029 | orchestrator |
2025-08-29 17:30:47.093033 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-08-29 17:30:47.093037 | orchestrator | Friday 29 August 2025 17:19:23 +0000 (0:00:00.525) 0:00:16.978 *********
2025-08-29 17:30:47.093042 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.093048 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.093052 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.093056 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.093060 | orchestrator |
2025-08-29 17:30:47.093064 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-08-29 17:30:47.093068 | orchestrator | Friday 29 August 2025 17:19:24 +0000 (0:00:00.883) 0:00:17.862 *********
2025-08-29 17:30:47.093073 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.093081 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.093088 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.093094 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.093098 | orchestrator |
2025-08-29 17:30:47.093102 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-08-29 17:30:47.093106 | orchestrator | Friday 29 August 2025 17:19:24 +0000 (0:00:00.287) 0:00:18.149 *********
2025-08-29 17:30:47.093114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-08-29 17:19:20.691775', 'end': '2025-08-29 17:19:20.975442', 'delta': '0:00:00.283667', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.093123 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-08-29 17:19:21.441451', 'end': '2025-08-29 17:19:21.719414', 'delta': '0:00:00.277963', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.093127 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-08-29 17:19:22.404173', 'end': '2025-08-29 17:19:22.669729', 'delta': '0:00:00.265556', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.093132 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.093135 | orchestrator |
2025-08-29 17:30:47.093139 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-08-29 17:30:47.093143 | orchestrator | Friday 29 August 2025 17:19:25 +0000 (0:00:00.592) 0:00:18.742 *********
2025-08-29 17:30:47.093147 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.093151 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.093155 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.093159 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.093163 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.093167 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.093170 | orchestrator |
2025-08-29 17:30:47.093174 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-08-29 17:30:47.093178 | orchestrator | Friday 29 August 2025 17:19:26 +0000 (0:00:01.680) 0:00:20.422 *********
2025-08-29 17:30:47.093182 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-08-29 17:30:47.093186 | orchestrator |
2025-08-29 17:30:47.093190 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-08-29 17:30:47.093194 | orchestrator | Friday 29 August 2025 17:19:27 +0000 (0:00:00.670) 0:00:21.092 *********
2025-08-29 17:30:47.093198 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.093202 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.093206 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.093212 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.093216 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.093220 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.093224 | orchestrator |
2025-08-29 17:30:47.093228 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-08-29 17:30:47.093232 | orchestrator | Friday 29 August 2025 17:19:28 +0000 (0:00:01.370) 0:00:22.463 *********
2025-08-29 17:30:47.093236 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.093240 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.093243 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.093247 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.093251 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.093255 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.093259 | orchestrator |
2025-08-29 17:30:47.093263 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-08-29 17:30:47.093267 | orchestrator | Friday 29 August 2025 17:19:30 +0000 (0:00:01.875) 0:00:24.338 *********
2025-08-29 17:30:47.093274 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.093278 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.093281 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.093285 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.093289 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.093293 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.093297 | orchestrator |
2025-08-29 17:30:47.093301 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-08-29 17:30:47.093305 | orchestrator | Friday 29 August 2025 17:19:32 +0000 (0:00:01.500) 0:00:25.839 *********
2025-08-29 17:30:47.093309 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.093312 | orchestrator |
2025-08-29 17:30:47.093316 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-08-29 17:30:47.093320 | orchestrator | Friday 29 August 2025 17:19:32 +0000 (0:00:00.151) 0:00:25.990 *********
2025-08-29 17:30:47.093324 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.093328 | orchestrator |
2025-08-29 17:30:47.093332 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-08-29 17:30:47.093336 | orchestrator | Friday 29 August 2025 17:19:32 +0000 (0:00:00.314) 0:00:26.305 *********
2025-08-29 17:30:47.093340 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.093520 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.093526 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.093530 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.093534 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.093538 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.093542 | orchestrator |
2025-08-29 17:30:47.093549 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-08-29 17:30:47.093553 | orchestrator | Friday 29 August 2025 17:19:33 +0000 (0:00:00.896) 0:00:27.201 *********
2025-08-29 17:30:47.093557 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.093561 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.093565 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.093569 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.093572 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.093576 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.093580 | orchestrator |
2025-08-29 17:30:47.093584 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-08-29 17:30:47.093588 | orchestrator | Friday 29 August 2025 17:19:34 +0000 (0:00:01.236) 0:00:28.438 *********
2025-08-29 17:30:47.093592 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.093596 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.093600 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.093604 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.093608 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.093612 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.093616 | orchestrator |
2025-08-29 17:30:47.093620 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-08-29 17:30:47.093624 | orchestrator | Friday 29 August 2025 17:19:36 +0000 (0:00:01.547) 0:00:29.985 *********
2025-08-29 17:30:47.093628 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.093631 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.093635 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.093639 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.093643 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.093647 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.093651 | orchestrator |
2025-08-29 17:30:47.093655 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-08-29 17:30:47.093659 | orchestrator | Friday 29 August 2025 17:19:37 +0000 (0:00:01.322) 0:00:31.307 *********
2025-08-29 17:30:47.093663 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.093667 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.093675 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.093679 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.093682 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.093686 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.093693 | orchestrator |
2025-08-29 17:30:47.093699 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-08-29 17:30:47.093703 | orchestrator | Friday 29 August 2025 17:19:38 +0000 (0:00:00.647) 0:00:31.955 *********
2025-08-29 17:30:47.093707 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.093711 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.093714 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.093718 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.093722 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.093726 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.093730 | orchestrator |
2025-08-29 17:30:47.093734 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-08-29 17:30:47.093738 | orchestrator | Friday 29 August 2025 17:19:39 +0000 (0:00:00.800) 0:00:32.755 *********
2025-08-29 17:30:47.093742 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.093746 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.093750 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.093753 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.093757 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.093761 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.093765 | orchestrator |
2025-08-29 17:30:47.093771 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-08-29 17:30:47.093776 | orchestrator | Friday 29 August 2025 17:19:40 +0000 (0:00:01.057) 0:00:33.813 *********
2025-08-29 17:30:47.093780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b00dade2--f82b--53af--89a3--8c9250354ec6-osd--block--b00dade2--f82b--53af--89a3--8c9250354ec6', 'dm-uuid-LVM-VY8oO4jmapOTYN6w4zG3PV2M2NyDnMLhL8j5KdTyG1xSlCfRvP3XmgHoqydm0inH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8088253a--7e26--529d--8fdb--0f472c9bb5d3-osd--block--8088253a--7e26--529d--8fdb--0f472c9bb5d3', 'dm-uuid-LVM-FVeBMUPoAV9RcTz0ycqRnP1EtAKr6OFuAEc2nlv4hoplmaxvoQG2BgcNFvR0LK8g'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part1', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part14', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part15', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part16', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 17:30:47.093848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b00dade2--f82b--53af--89a3--8c9250354ec6-osd--block--b00dade2--f82b--53af--89a3--8c9250354ec6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Dkm75k-dgyQ-fHCc-DskW-R3kq-Avrn-VOvlhd', 'scsi-0QEMU_QEMU_HARDDISK_82706033-d7aa-4ff6-a1c3-b9e917369a8a', 'scsi-SQEMU_QEMU_HARDDISK_82706033-d7aa-4ff6-a1c3-b9e917369a8a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 17:30:47.093855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8088253a--7e26--529d--8fdb--0f472c9bb5d3-osd--block--8088253a--7e26--529d--8fdb--0f472c9bb5d3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fo9JB4-Jkhr-Vfiv-ZF1s-HkrB-0RZo-O0eTPw', 'scsi-0QEMU_QEMU_HARDDISK_2fca4086-decf-4125-8890-d41999d174b7', 'scsi-SQEMU_QEMU_HARDDISK_2fca4086-decf-4125-8890-d41999d174b7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 17:30:47.093860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95e03a76-aedc-4db5-a6e6-17eb5f40fbcd', 'scsi-SQEMU_QEMU_HARDDISK_95e03a76-aedc-4db5-a6e6-17eb5f40fbcd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 17:30:47.093865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 17:30:47.093872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7cc16d54--75e9--5c21--b21a--878ce6efb3d6-osd--block--7cc16d54--75e9--5c21--b21a--878ce6efb3d6', 'dm-uuid-LVM-mGBVEzB4gcKM39Xx0aLE22bZ3zyymiRPX0QySadYR5ZdqE0ySp3sIrwpHhhyJTyJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53dd44b5--7849--5101--9e2a--fd90ac927c8f-osd--block--53dd44b5--7849--5101--9e2a--fd90ac927c8f', 'dm-uuid-LVM-tZ5rsxduFMnqzvdTHnqQLWccJnZQYchZIAFT1e5WnYYXln1r877aX72JY4ISW52H'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093897 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.093903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093915 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part1', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part14', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part15', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part16', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 17:30:47.093936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7cc16d54--75e9--5c21--b21a--878ce6efb3d6-osd--block--7cc16d54--75e9--5c21--b21a--878ce6efb3d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yYjIpl-QKq5-SuzB-H6iQ-dv9d-fL3r-4wbiAB', 'scsi-0QEMU_QEMU_HARDDISK_fde2b913-b6a3-4677-89e6-f2f4f6c968dc', 'scsi-SQEMU_QEMU_HARDDISK_fde2b913-b6a3-4677-89e6-f2f4f6c968dc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 17:30:47.093940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a4c19265--6381--5c6d--bd77--cfabc91aafa2-osd--block--a4c19265--6381--5c6d--bd77--cfabc91aafa2', 'dm-uuid-LVM-QSwdfYrrHmm7V51x7PzoPflTmwgQDmNf0ILcNWcv6jcDDetm7KKU0VlRyvTcFcbc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--53dd44b5--7849--5101--9e2a--fd90ac927c8f-osd--block--53dd44b5--7849--5101--9e2a--fd90ac927c8f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PG2lDJ-9inP-j75c-Ibws-o2zw-l7L3-HOqOBz', 'scsi-0QEMU_QEMU_HARDDISK_d9d2a167-7162-4dbb-9ae3-4acc9c24be60', 'scsi-SQEMU_QEMU_HARDDISK_d9d2a167-7162-4dbb-9ae3-4acc9c24be60'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 17:30:47.093955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_822f3ffb-f416-492b-baa5-9c709b0e03df', 'scsi-SQEMU_QEMU_HARDDISK_822f3ffb-f416-492b-baa5-9c709b0e03df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 17:30:47.093960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b12c38cd--5c6b--5ee1--93c6--dbb5afb60591-osd--block--b12c38cd--5c6b--5ee1--93c6--dbb5afb60591', 'dm-uuid-LVM-i9XIynlVD4XQHui1DTZfZNm2dtjd80d66kxMgcfPpxXv56ULLy5Z5x7FNHv6aseG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 17:30:47.093978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.093996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.094000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:30:47.094004 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.094008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0',
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-08-29 17:30:47.094048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red 
Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part1', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part14', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part15', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part16', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 
17:30:47.094080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416', 'scsi-SQEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416-part1', 'scsi-SQEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416-part14', 'scsi-SQEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416-part15', 'scsi-SQEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416-part16', 'scsi-SQEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:30:47.094087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a4c19265--6381--5c6d--bd77--cfabc91aafa2-osd--block--a4c19265--6381--5c6d--bd77--cfabc91aafa2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TQ67LM-0DeH-KRgC-shpu-YHah-KL2O-nFSj7t', 'scsi-0QEMU_QEMU_HARDDISK_4c7a21bb-a31e-489d-9d8e-9d9677eceddb', 'scsi-SQEMU_QEMU_HARDDISK_4c7a21bb-a31e-489d-9d8e-9d9677eceddb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:30:47.094092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:30:47.094098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b12c38cd--5c6b--5ee1--93c6--dbb5afb60591-osd--block--b12c38cd--5c6b--5ee1--93c6--dbb5afb60591'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-otIJoe-qyBJ-4yKo-skrf-cWsC-MqoW-XYudVU', 'scsi-0QEMU_QEMU_HARDDISK_a2504ec8-b4e2-4484-b80f-e6a3be658c85', 'scsi-SQEMU_QEMU_HARDDISK_a2504ec8-b4e2-4484-b80f-e6a3be658c85'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:30:47.094102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f77776fc-d93a-4a3c-99b5-8bf2997ddc70', 'scsi-SQEMU_QEMU_HARDDISK_f77776fc-d93a-4a3c-99b5-8bf2997ddc70'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:30:47.094110 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.094116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:30:47.094121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094138 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.094142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286', 'scsi-SQEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286-part1', 'scsi-SQEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286-part14', 'scsi-SQEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286-part15', 'scsi-SQEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286-part16', 'scsi-SQEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:30:47.094182 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:30:47.094189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094194 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.094198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-08-29 17:30:47.094210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:30:47.094434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7', 'scsi-SQEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7-part1', 'scsi-SQEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7-part14', 'scsi-SQEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7-part15', 'scsi-SQEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7-part16', 'scsi-SQEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:30:47.094446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:30:47.094451 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.094455 | orchestrator | 2025-08-29 17:30:47.094460 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-08-29 17:30:47.094466 | orchestrator | Friday 29 August 2025 17:19:41 +0000 (0:00:01.803) 0:00:35.616 ********* 2025-08-29 17:30:47.094471 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7cc16d54--75e9--5c21--b21a--878ce6efb3d6-osd--block--7cc16d54--75e9--5c21--b21a--878ce6efb3d6', 'dm-uuid-LVM-mGBVEzB4gcKM39Xx0aLE22bZ3zyymiRPX0QySadYR5ZdqE0ySp3sIrwpHhhyJTyJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094476 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53dd44b5--7849--5101--9e2a--fd90ac927c8f-osd--block--53dd44b5--7849--5101--9e2a--fd90ac927c8f', 'dm-uuid-LVM-tZ5rsxduFMnqzvdTHnqQLWccJnZQYchZIAFT1e5WnYYXln1r877aX72JY4ISW52H'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094483 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b00dade2--f82b--53af--89a3--8c9250354ec6-osd--block--b00dade2--f82b--53af--89a3--8c9250354ec6', 'dm-uuid-LVM-VY8oO4jmapOTYN6w4zG3PV2M2NyDnMLhL8j5KdTyG1xSlCfRvP3XmgHoqydm0inH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094629 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8088253a--7e26--529d--8fdb--0f472c9bb5d3-osd--block--8088253a--7e26--529d--8fdb--0f472c9bb5d3', 'dm-uuid-LVM-FVeBMUPoAV9RcTz0ycqRnP1EtAKr6OFuAEc2nlv4hoplmaxvoQG2BgcNFvR0LK8g'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094638 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094643 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094647 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094652 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094658 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094665 | 
orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094669 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094677 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094681 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094685 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094690 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094699 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a4c19265--6381--5c6d--bd77--cfabc91aafa2-osd--block--a4c19265--6381--5c6d--bd77--cfabc91aafa2', 'dm-uuid-LVM-QSwdfYrrHmm7V51x7PzoPflTmwgQDmNf0ILcNWcv6jcDDetm7KKU0VlRyvTcFcbc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094703 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b12c38cd--5c6b--5ee1--93c6--dbb5afb60591-osd--block--b12c38cd--5c6b--5ee1--93c6--dbb5afb60591', 'dm-uuid-LVM-i9XIynlVD4XQHui1DTZfZNm2dtjd80d66kxMgcfPpxXv56ULLy5Z5x7FNHv6aseG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094710 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094714 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094719 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094723 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094735 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part1', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part14', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part15', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part16', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b00dade2--f82b--53af--89a3--8c9250354ec6-osd--block--b00dade2--f82b--53af--89a3--8c9250354ec6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Dkm75k-dgyQ-fHCc-DskW-R3kq-Avrn-VOvlhd', 'scsi-0QEMU_QEMU_HARDDISK_82706033-d7aa-4ff6-a1c3-b9e917369a8a', 'scsi-SQEMU_QEMU_HARDDISK_82706033-d7aa-4ff6-a1c3-b9e917369a8a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094746 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094756 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8088253a--7e26--529d--8fdb--0f472c9bb5d3-osd--block--8088253a--7e26--529d--8fdb--0f472c9bb5d3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fo9JB4-Jkhr-Vfiv-ZF1s-HkrB-0RZo-O0eTPw', 'scsi-0QEMU_QEMU_HARDDISK_2fca4086-decf-4125-8890-d41999d174b7', 'scsi-SQEMU_QEMU_HARDDISK_2fca4086-decf-4125-8890-d41999d174b7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094760 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094765 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094772 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094777 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094781 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95e03a76-aedc-4db5-a6e6-17eb5f40fbcd', 'scsi-SQEMU_QEMU_HARDDISK_95e03a76-aedc-4db5-a6e6-17eb5f40fbcd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094791 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094795 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094800 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094807 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094811 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094818 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part1', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part14', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part15', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part16', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094828 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a4c19265--6381--5c6d--bd77--cfabc91aafa2-osd--block--a4c19265--6381--5c6d--bd77--cfabc91aafa2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TQ67LM-0DeH-KRgC-shpu-YHah-KL2O-nFSj7t', 'scsi-0QEMU_QEMU_HARDDISK_4c7a21bb-a31e-489d-9d8e-9d9677eceddb', 'scsi-SQEMU_QEMU_HARDDISK_4c7a21bb-a31e-489d-9d8e-9d9677eceddb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094833 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b12c38cd--5c6b--5ee1--93c6--dbb5afb60591-osd--block--b12c38cd--5c6b--5ee1--93c6--dbb5afb60591'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-otIJoe-qyBJ-4yKo-skrf-cWsC-MqoW-XYudVU', 'scsi-0QEMU_QEMU_HARDDISK_a2504ec8-b4e2-4484-b80f-e6a3be658c85', 'scsi-SQEMU_QEMU_HARDDISK_a2504ec8-b4e2-4484-b80f-e6a3be658c85'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094840 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part1', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part14', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part15', 
'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part16', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094847 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094854 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f77776fc-d93a-4a3c-99b5-8bf2997ddc70', 'scsi-SQEMU_QEMU_HARDDISK_f77776fc-d93a-4a3c-99b5-8bf2997ddc70'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094859 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094865 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:30:47.094872 | orchestrator | 
skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7cc16d54--75e9--5c21--b21a--878ce6efb3d6-osd--block--7cc16d54--75e9--5c21--b21a--878ce6efb3d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yYjIpl-QKq5-SuzB-H6iQ-dv9d-fL3r-4wbiAB', 'scsi-0QEMU_QEMU_HARDDISK_fde2b913-b6a3-4677-89e6-f2f4f6c968dc', 'scsi-SQEMU_QEMU_HARDDISK_fde2b913-b6a3-4677-89e6-f2f4f6c968dc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.094876 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.094883 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.094890 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.094896 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--53dd44b5--7849--5101--9e2a--fd90ac927c8f-osd--block--53dd44b5--7849--5101--9e2a--fd90ac927c8f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PG2lDJ-9inP-j75c-Ibws-o2zw-l7L3-HOqOBz', 'scsi-0QEMU_QEMU_HARDDISK_d9d2a167-7162-4dbb-9ae3-4acc9c24be60', 'scsi-SQEMU_QEMU_HARDDISK_d9d2a167-7162-4dbb-9ae3-4acc9c24be60'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.094904 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.094910 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.094915 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.094921 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.094925 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.094929 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.094933 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_822f3ffb-f416-492b-baa5-9c709b0e03df', 'scsi-SQEMU_QEMU_HARDDISK_822f3ffb-f416-492b-baa5-9c709b0e03df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.094969 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.094974 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.094981 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.094985 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.094991 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.094996 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.095003 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.095009 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416', 'scsi-SQEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416-part1', 'scsi-SQEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416-part14', 'scsi-SQEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links':
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416-part15', 'scsi-SQEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416-part16', 'scsi-SQEMU_QEMU_HARDDISK_16b144c0-621b-45de-9e0b-661fe8ca3416-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.095014 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.095021 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.095025 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.095029 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.095040 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286', 'scsi-SQEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286-part1', 'scsi-SQEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286-part14', 'scsi-SQEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286-part15', 'scsi-SQEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286-part16', 'scsi-SQEMU_QEMU_HARDDISK_a4dbedbb-5d3a-43c0-9006-369b69148286-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
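The wall of skip messages above interleaves (at least) two different `when` guards, visible in each record's `false_condition` field: group membership for the control-plane nodes (testbed-node-0/1/2) and `osd_auto_discovery` for the OSD nodes. A minimal Python sketch of that gating logic follows; it folds both guards into one decision function purely for illustration, and the group layout and the default group name `osds` are assumptions inferred from the log, not taken from the playbook:

```python
# Illustration only: a plain-Python restaging of the two 'when' guards seen
# in the skip records. The real evaluation happens inside Ansible's Jinja2
# templating; "osds" as osd_group_name is an assumption, not from the log.

def evaluate_skip(node, hostvars, groups, osd_group_name="osds"):
    """Return the failing condition string, or None if the task would run."""
    # Guard seen on testbed-node-0/1/2: host must be in the OSD group.
    if node not in groups.get(osd_group_name, []):
        return "inventory_hostname in groups.get(osd_group_name, [])"
    # Guard seen on testbed-node-4: auto discovery must be enabled.
    if not bool(hostvars.get("osd_auto_discovery", False)):
        return "osd_auto_discovery | default(False) | bool"
    return None

groups = {"osds": ["testbed-node-3", "testbed-node-4", "testbed-node-5"]}

# Control-plane node: fails the group-membership guard.
assert (evaluate_skip("testbed-node-0", {}, groups)
        == "inventory_hostname in groups.get(osd_group_name, [])")
# OSD node with osd_auto_discovery left at its default: fails the second guard.
assert (evaluate_skip("testbed-node-4", {}, groups)
        == "osd_auto_discovery | default(False) | bool")
# OSD node with auto discovery enabled: no guard fails.
assert evaluate_skip("testbed-node-4", {"osd_auto_discovery": True}, groups) is None
```

Under this reading, every loop item on a non-OSD host is skipped regardless of device, which matches the per-device skip records for testbed-node-0/1/2 above.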
2025-08-29 17:30:47.095045 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.095049 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.095055 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.095062 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.095067 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.095073 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.095077 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.095081 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.095087 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.095094 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.095101 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7', 'scsi-SQEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7-part1', 'scsi-SQEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7-part14', 'scsi-SQEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7-part15', 'scsi-SQEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7-part16', 'scsi-SQEMU_QEMU_HARDDISK_25b9d25e-6018-4201-98d7-97beb0a1ade7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids':
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.095105 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:30:47.095110 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.095113 | orchestrator |
2025-08-29 17:30:47.095118 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-08-29 17:30:47.095124 | orchestrator | Friday 29 August 2025 17:19:44 +0000 (0:00:03.004) 0:00:38.620 *********
2025-08-29 17:30:47.095131 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.095135 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.095139 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.095143 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.095147 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.095150 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.095154 | orchestrator |
2025-08-29 17:30:47.095158 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-08-29 17:30:47.095162 | orchestrator | Friday 29 August 2025 17:19:46 +0000 (0:00:01.684) 0:00:40.305 *********
2025-08-29 17:30:47.095166 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.095170 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.095174 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.095179 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.095185 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.095191 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.095194 | orchestrator |
2025-08-29 17:30:47.095198 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-08-29 17:30:47.095203 | orchestrator | Friday 29 August 2025 17:19:47 +0000 (0:00:01.334) 0:00:41.415 *********
2025-08-29 17:30:47.095206 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.095210 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.095214 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.095218 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.095222 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.095226 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.095230 | orchestrator |
2025-08-29 17:30:47.095234 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-08-29 17:30:47.095238 | orchestrator | Friday 29 August 2025 17:19:49 +0000 (0:00:01.334) 0:00:42.749 *********
2025-08-29 17:30:47.095241 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.095245 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.095249 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.095253 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.095257 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.095261 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.095265 | orchestrator |
2025-08-29 17:30:47.095269 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-08-29 17:30:47.095274 | orchestrator | Friday 29 August 2025 17:19:49 +0000 (0:00:00.794) 0:00:43.544 *********
2025-08-29 17:30:47.095279 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.095283 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.095287 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.095292 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.095296 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.095301 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.095305 | orchestrator |
2025-08-29 17:30:47.095310 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-08-29 17:30:47.095314 | orchestrator | Friday 29 August 2025 17:19:51 +0000 (0:00:01.542) 0:00:45.087 *********
2025-08-29 17:30:47.095319 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.095324 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.095329 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.095336 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.095341 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.095355 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.095360 | orchestrator |
2025-08-29 17:30:47.095364 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-08-29 17:30:47.095369 | orchestrator | Friday 29 August 2025 17:19:52 +0000 (0:00:01.414) 0:00:46.501 *********
2025-08-29 17:30:47.095373 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 17:30:47.095378 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 17:30:47.095387 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-08-29 17:30:47.095392 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-08-29 17:30:47.095397 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-08-29 17:30:47.095401 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 17:30:47.095406 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 17:30:47.095410 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-08-29 17:30:47.095415 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-08-29 17:30:47.095419 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-08-29 17:30:47.095424 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 17:30:47.095428 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-08-29 17:30:47.095433 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-08-29 17:30:47.095437 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 17:30:47.095442 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-08-29 17:30:47.095446 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-08-29 17:30:47.095450 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-08-29 17:30:47.095455 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-08-29 17:30:47.095459 | orchestrator |
2025-08-29 17:30:47.095487 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-08-29 17:30:47.095494 | orchestrator | Friday 29 August 2025 17:19:58 +0000 (0:00:05.817) 0:00:52.318 *********
2025-08-29 17:30:47.095498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 17:30:47.095503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 17:30:47.095507 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 17:30:47.095512 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.095516 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-08-29 17:30:47.095521 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-08-29 17:30:47.095525 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-08-29 17:30:47.095529 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.095534 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-08-29 17:30:47.095539 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-08-29 17:30:47.095899 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-08-29 17:30:47.095931 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.095936 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 17:30:47.095940 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 17:30:47.095944 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 17:30:47.095948 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.095952 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-08-29 17:30:47.095956 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-08-29 17:30:47.095960 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-08-29 17:30:47.095964 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.095968 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-08-29 17:30:47.095993 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-08-29 17:30:47.095998 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-08-29 17:30:47.096002 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.096006 | orchestrator |
2025-08-29 17:30:47.096010 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-08-29 17:30:47.096015 | orchestrator | Friday 29 August 2025 17:20:01 +0000 (0:00:02.511) 0:00:54.830 *********
2025-08-29 17:30:47.096018 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.096022 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.096031 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.096035 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:30:47.096039 | orchestrator |
2025-08-29 17:30:47.096044 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-08-29 17:30:47.096048 | orchestrator | Friday 29 August 2025 17:20:02 +0000 (0:00:01.267) 0:00:56.098 *********
2025-08-29 17:30:47.096052 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.096056 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.096060 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.096064 | orchestrator |
2025-08-29 17:30:47.096083 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-08-29 17:30:47.096088 | orchestrator | Friday 29 August 2025 17:20:02 +0000 (0:00:00.462) 0:00:56.561 *********
2025-08-29 17:30:47.096092 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.096461 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.096475 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.096479 | orchestrator |
2025-08-29 17:30:47.096484 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-08-29 17:30:47.096488 | orchestrator | Friday 29 August 2025 17:20:03 +0000 (0:00:00.697) 0:00:57.259 *********
2025-08-29 17:30:47.096492 | orchestrator | skipping: [testbed-node-3]
2025-08-
17:30:47.096496 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.096500 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.096504 | orchestrator | 2025-08-29 17:30:47.096507 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-08-29 17:30:47.096511 | orchestrator | Friday 29 August 2025 17:20:04 +0000 (0:00:00.404) 0:00:57.664 ********* 2025-08-29 17:30:47.096515 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.096519 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.096523 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.096527 | orchestrator | 2025-08-29 17:30:47.096531 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-08-29 17:30:47.096538 | orchestrator | Friday 29 August 2025 17:20:05 +0000 (0:00:01.190) 0:00:58.854 ********* 2025-08-29 17:30:47.096542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 17:30:47.096546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 17:30:47.096550 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 17:30:47.096554 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.096557 | orchestrator | 2025-08-29 17:30:47.096561 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-08-29 17:30:47.096565 | orchestrator | Friday 29 August 2025 17:20:05 +0000 (0:00:00.500) 0:00:59.355 ********* 2025-08-29 17:30:47.096569 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 17:30:47.096573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 17:30:47.096577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 17:30:47.096581 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.096585 | orchestrator | 2025-08-29 17:30:47.096589 | orchestrator | 
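Read in order, the set_radosgw_address.yml tasks above form a fallback chain: the RGW bind address is taken from radosgw_address_block, then from a plain radosgw_address, then looked up on radosgw_interface, and in this run only the plain radosgw_address step matched on the OSD/RGW nodes. A minimal sketch of the step that matched; the guard conditions and module form are illustrative assumptions, not copied from the role:

```yaml
# Sketch only: the "Set_fact _radosgw_address to radosgw_address" step.
# The when-conditions below are assumptions for illustration.
- name: Set_fact _radosgw_address to radosgw_address
  ansible.builtin.set_fact:
    _radosgw_address: "{{ radosgw_address }}"
  when:
    - radosgw_address_block is not defined or not radosgw_address_block
    - radosgw_address is defined
```

The skipped ipv4/ipv6 block and interface variants above would carry analogous conditions on the other two variables.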
TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Friday 29 August 2025 17:20:06 +0000 (0:00:00.579) 0:00:59.934 *********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Friday 29 August 2025 17:20:06 +0000 (0:00:00.425) 0:01:00.360 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Friday 29 August 2025 17:20:07 +0000 (0:00:01.176) 0:01:00.877 *********
ok: [testbed-node-4] => (item=0)
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-5] => (item=0)

TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
Friday 29 August 2025 17:20:08 +0000 (0:00:01.176) 0:01:02.053 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
ok: [testbed-node-3] => (item=testbed-node-3)
ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)

TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
Friday 29 August 2025 17:20:09 +0000 (0:00:01.318) 0:01:03.372 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
ok: [testbed-node-3] => (item=testbed-node-3)
ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Friday 29 August 2025 17:20:12 +0000 (0:00:02.633) 0:01:06.005 *********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Friday 29 August 2025 17:20:14 +0000 (0:00:01.923) 0:01:07.928 *********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Friday 29 August 2025 17:20:15 +0000 (0:00:01.265) 0:01:09.194 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Friday 29 August 2025 17:20:17 +0000 (0:00:01.657) 0:01:10.852 *********
ok: [testbed-node-3]
skipping: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Friday 29 August 2025 17:20:19 +0000 (0:00:01.950) 0:01:12.803 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
ok: [testbed-node-5]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Friday 29 August 2025 17:20:21 +0000 (0:00:01.928) 0:01:14.732 *********
ok: [testbed-node-3]
skipping: [testbed-node-0]
ok: [testbed-node-4]
skipping: [testbed-node-1]
ok: [testbed-node-5]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Friday 29 August 2025 17:20:22 +0000 (0:00:01.459) 0:01:16.192 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Friday 29 August 2025 17:20:24 +0000 (0:00:01.967) 0:01:18.159 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Friday 29 August 2025 17:20:25 +0000 (0:00:00.976) 0:01:19.135 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Friday 29 August 2025 17:20:26 +0000 (0:00:01.113) 0:01:20.249 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Friday 29 August 2025 17:20:29 +0000 (0:00:02.466) 0:01:22.716 *********
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Friday 29 August 2025 17:20:30 +0000 (0:00:01.550) 0:01:24.266 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Friday 29 August 2025 17:20:32 +0000 (0:00:01.538) 0:01:25.804 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Friday 29 August 2025 17:20:33 +0000 (0:00:00.970) 0:01:26.775 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Friday 29 August 2025 17:20:34 +0000 (0:00:01.319) 0:01:28.094 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Friday 29 August 2025 17:20:35 +0000 (0:00:00.981) 0:01:29.076 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Friday 29 August 2025 17:20:36 +0000 (0:00:01.088) 0:01:30.164 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Friday 29 August 2025 17:20:37 +0000 (0:00:00.735) 0:01:30.900 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
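The Check for a … container tasks above run only on hosts in the matching group (hence the skipping/ok split between the OSD nodes 3-5 and the mon/mgr nodes 0-2), and the Set_fact handler_*_status tasks then record the probe result for later restart handlers. A sketch of that probe-then-fact pattern; the command, variable, and group names are assumptions, not copied from the role:

```yaml
# Sketch only: probe for a running mon container, then record the
# result as a fact. Names and filter syntax are illustrative assumptions.
- name: Check for a mon container
  ansible.builtin.command: "docker ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}"
  register: ceph_mon_container_stat
  changed_when: false
  failed_when: false
  when: inventory_hostname in groups['mons']

- name: Set_fact handler_mon_status
  ansible.builtin.set_fact:
    handler_mon_status: "{{ ceph_mon_container_stat.stdout | length > 0 }}"
  when: inventory_hostname in groups['mons']
```

Registering with changed_when/failed_when disabled keeps the probe side-effect-free, so a missing container shows up only in the fact, not as a task failure.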
TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Friday 29 August 2025 17:20:38 +0000 (0:00:00.897) 0:01:31.797 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Friday 29 August 2025 17:20:38 +0000 (0:00:00.694) 0:01:32.491 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Friday 29 August 2025 17:20:40 +0000 (0:00:01.226) 0:01:33.717 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-container-common : Generate systemd ceph target file] ***************
Friday 29 August 2025 17:20:41 +0000 (0:00:01.612) 0:01:35.330 *********
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-container-common : Enable ceph.target] ******************************
Friday 29 August 2025 17:20:43 +0000 (0:00:01.492) 0:01:36.822 *********
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]

TASK [ceph-container-common : Include prerequisites.yml] ***********************
Friday 29 August 2025 17:20:46 +0000 (0:00:03.158) 0:01:39.981 *********
included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
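Generate systemd ceph target file followed by Enable ceph.target is the usual template-then-systemd pair, and both report changed on all six nodes, so the target was newly installed and enabled in this run. A hedged sketch of that pair; the template name and module options are assumptions for illustration:

```yaml
# Sketch of the template + systemd pair above; src, dest, and
# options are illustrative assumptions, not copied from the role.
- name: Generate systemd ceph target file
  ansible.builtin.template:
    src: ceph.target.j2
    dest: /etc/systemd/system/ceph.target
    mode: "0644"

- name: Enable ceph.target
  ansible.builtin.systemd:
    name: ceph.target
    enabled: true
    state: started
    daemon_reload: true
```

daemon_reload is the piece that makes systemd pick up the freshly templated unit before enabling it.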
TASK [ceph-container-common : Stop lvmetad] ************************************
Friday 29 August 2025 17:20:47 +0000 (0:00:01.172) 0:01:41.153 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Disable and mask lvmetad service] ****************
Friday 29 August 2025 17:20:48 +0000 (0:00:00.620) 0:01:41.774 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Remove ceph udev rules] **************************
Friday 29 August 2025 17:20:48 +0000 (0:00:00.796) 0:01:42.570 *********
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)

TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
Friday 29 August 2025 17:20:50 +0000 (0:00:01.308) 0:01:43.879 *********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-container-common : Restore certificates selinux context] ************
Friday 29 August 2025 17:20:51 +0000 (0:00:01.161) 0:01:45.040 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Install python3 on osd nodes] ********************
Friday 29 August 2025 17:20:51 +0000 (0:00:00.607) 0:01:45.648 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Include registry.yml] ****************************
Friday 29 August 2025 17:20:52 +0000 (0:00:00.810) 0:01:46.459 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
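The Remove ceph udev rules task above deletes the same two rule files on every node. The file paths are taken verbatim from the task output; the loop/module form is an assumption:

```yaml
# Paths as reported in the task output above; module choice assumed.
- name: Remove ceph udev rules
  ansible.builtin.file:
    path: "{{ item }}"
    state: absent
  loop:
    - /usr/lib/udev/rules.d/95-ceph-osd.rules
    - /usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules
```

All hosts report ok rather than changed, so neither rule file was present on these freshly provisioned nodes.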
TASK [ceph-container-common : Include fetch_image.yml] *************************
Friday 29 August 2025 17:20:53 +0000 (0:00:00.590) 0:01:47.049 *********
included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Pulling Ceph container image] ********************
Friday 29 August 2025 17:20:54 +0000 (0:00:01.206) 0:01:48.256 *********
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-0]

TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
Friday 29 August 2025 17:22:08 +0000 (0:01:13.588) 0:03:01.845 *********
skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-2]
skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
skipping:
[testbed-node-1] 2025-08-29 17:30:47.098421 | orchestrator | 2025-08-29 17:30:47.098425 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-08-29 17:30:47.098429 | orchestrator | Friday 29 August 2025 17:22:09 +0000 (0:00:00.913) 0:03:02.758 ********* 2025-08-29 17:30:47.098432 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.098436 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.098440 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.098443 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.098447 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.098451 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.098454 | orchestrator | 2025-08-29 17:30:47.098458 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-08-29 17:30:47.098462 | orchestrator | Friday 29 August 2025 17:22:10 +0000 (0:00:01.023) 0:03:03.782 ********* 2025-08-29 17:30:47.098466 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.098469 | orchestrator | 2025-08-29 17:30:47.098473 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-08-29 17:30:47.098477 | orchestrator | Friday 29 August 2025 17:22:10 +0000 (0:00:00.251) 0:03:04.033 ********* 2025-08-29 17:30:47.098481 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.098484 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.098488 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.098492 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.098495 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.098499 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.098503 | orchestrator | 2025-08-29 17:30:47.098506 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-08-29 17:30:47.098510 | 
orchestrator | Friday 29 August 2025 17:22:11 +0000 (0:00:00.867) 0:03:04.900 ********* 2025-08-29 17:30:47.098519 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.098524 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.098528 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.098532 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.098535 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.098539 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.098543 | orchestrator | 2025-08-29 17:30:47.098558 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-08-29 17:30:47.098562 | orchestrator | Friday 29 August 2025 17:22:12 +0000 (0:00:00.968) 0:03:05.869 ********* 2025-08-29 17:30:47.098566 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.098569 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.098573 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.098577 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.098581 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.098584 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.098588 | orchestrator | 2025-08-29 17:30:47.098592 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-08-29 17:30:47.098595 | orchestrator | Friday 29 August 2025 17:22:12 +0000 (0:00:00.671) 0:03:06.541 ********* 2025-08-29 17:30:47.098599 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.098603 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.098607 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.098610 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.098614 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.098618 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.098622 | orchestrator | 2025-08-29 17:30:47.098625 | orchestrator | TASK 
[ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-08-29 17:30:47.098631 | orchestrator | Friday 29 August 2025 17:22:15 +0000 (0:00:02.660) 0:03:09.201 ********* 2025-08-29 17:30:47.098635 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.098639 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.098642 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.098646 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.098650 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.098654 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.098657 | orchestrator | 2025-08-29 17:30:47.098661 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-08-29 17:30:47.098665 | orchestrator | Friday 29 August 2025 17:22:16 +0000 (0:00:00.870) 0:03:10.071 ********* 2025-08-29 17:30:47.098669 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:30:47.098673 | orchestrator | 2025-08-29 17:30:47.098677 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-08-29 17:30:47.098680 | orchestrator | Friday 29 August 2025 17:22:17 +0000 (0:00:01.194) 0:03:11.265 ********* 2025-08-29 17:30:47.098684 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.098688 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.098691 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.098695 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.098699 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.098703 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.098706 | orchestrator | 2025-08-29 17:30:47.098710 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-08-29 17:30:47.098714 | 
orchestrator | Friday 29 August 2025 17:22:18 +0000 (0:00:00.628) 0:03:11.894 ********* 2025-08-29 17:30:47.098717 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.098721 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.098725 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.098728 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.098732 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.098736 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.098739 | orchestrator | 2025-08-29 17:30:47.098746 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-08-29 17:30:47.098750 | orchestrator | Friday 29 August 2025 17:22:19 +0000 (0:00:00.842) 0:03:12.737 ********* 2025-08-29 17:30:47.098753 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.098757 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.098761 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.098764 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.098768 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.098772 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.098775 | orchestrator | 2025-08-29 17:30:47.098779 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-08-29 17:30:47.098783 | orchestrator | Friday 29 August 2025 17:22:19 +0000 (0:00:00.621) 0:03:13.358 ********* 2025-08-29 17:30:47.098787 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.098790 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.098794 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.098798 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.098801 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.098806 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.098812 | orchestrator | 2025-08-29 
17:30:47.098816 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-08-29 17:30:47.098820 | orchestrator | Friday 29 August 2025 17:22:20 +0000 (0:00:01.120) 0:03:14.478 ********* 2025-08-29 17:30:47.098824 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.098827 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.098831 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.098835 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.098838 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.098842 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.098846 | orchestrator | 2025-08-29 17:30:47.098849 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-08-29 17:30:47.098853 | orchestrator | Friday 29 August 2025 17:22:21 +0000 (0:00:00.794) 0:03:15.272 ********* 2025-08-29 17:30:47.098857 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.098860 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.098864 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.098868 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.098871 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.098875 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.098879 | orchestrator | 2025-08-29 17:30:47.098882 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-08-29 17:30:47.098886 | orchestrator | Friday 29 August 2025 17:22:22 +0000 (0:00:01.252) 0:03:16.525 ********* 2025-08-29 17:30:47.098890 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.098893 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.098897 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.098901 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.098904 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 17:30:47.098918 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.098922 | orchestrator | 2025-08-29 17:30:47.098926 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-08-29 17:30:47.098930 | orchestrator | Friday 29 August 2025 17:22:23 +0000 (0:00:00.792) 0:03:17.318 ********* 2025-08-29 17:30:47.098934 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.098938 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.098941 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.098945 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.098949 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.098953 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.098956 | orchestrator | 2025-08-29 17:30:47.098960 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-08-29 17:30:47.098964 | orchestrator | Friday 29 August 2025 17:22:24 +0000 (0:00:00.843) 0:03:18.161 ********* 2025-08-29 17:30:47.098972 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.098976 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.098980 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.098983 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.098987 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.098991 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.098995 | orchestrator | 2025-08-29 17:30:47.099001 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-08-29 17:30:47.099005 | orchestrator | Friday 29 August 2025 17:22:25 +0000 (0:00:01.220) 0:03:19.382 ********* 2025-08-29 17:30:47.099009 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-0, testbed-node-5, testbed-node-1, testbed-node-2 2025-08-29 
17:30:47.099012 | orchestrator | 2025-08-29 17:30:47.099016 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-08-29 17:30:47.099020 | orchestrator | Friday 29 August 2025 17:22:26 +0000 (0:00:01.189) 0:03:20.572 ********* 2025-08-29 17:30:47.099024 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-08-29 17:30:47.099028 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-08-29 17:30:47.099031 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-08-29 17:30:47.099035 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-08-29 17:30:47.099039 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-08-29 17:30:47.099043 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-08-29 17:30:47.099046 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-08-29 17:30:47.099050 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-08-29 17:30:47.099054 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-08-29 17:30:47.099058 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-08-29 17:30:47.099061 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-08-29 17:30:47.099065 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-08-29 17:30:47.099069 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-08-29 17:30:47.099072 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-08-29 17:30:47.099076 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-08-29 17:30:47.099080 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-08-29 17:30:47.099084 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-08-29 17:30:47.099087 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-08-29 17:30:47.099091 
| orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-08-29 17:30:47.099095 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-08-29 17:30:47.099098 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-08-29 17:30:47.099102 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-08-29 17:30:47.099106 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-08-29 17:30:47.099110 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-08-29 17:30:47.099113 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-08-29 17:30:47.099117 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-08-29 17:30:47.099121 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-08-29 17:30:47.099125 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-08-29 17:30:47.099128 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-08-29 17:30:47.099132 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-08-29 17:30:47.099136 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-08-29 17:30:47.099139 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-08-29 17:30:47.099143 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-08-29 17:30:47.099150 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-08-29 17:30:47.099153 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-08-29 17:30:47.099157 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-08-29 17:30:47.099161 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-08-29 17:30:47.099165 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-08-29 17:30:47.099168 | orchestrator | changed: 
[testbed-node-3] => (item=/var/lib/ceph/crash) 2025-08-29 17:30:47.099172 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-08-29 17:30:47.099176 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-08-29 17:30:47.099180 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 17:30:47.099183 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-08-29 17:30:47.099187 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-08-29 17:30:47.099201 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-08-29 17:30:47.099206 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-08-29 17:30:47.099209 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-08-29 17:30:47.099213 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 17:30:47.099217 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-08-29 17:30:47.099221 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-08-29 17:30:47.099224 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 17:30:47.099228 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 17:30:47.099232 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 17:30:47.099235 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 17:30:47.099239 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 17:30:47.099245 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 17:30:47.099248 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 17:30:47.099252 | orchestrator | changed: [testbed-node-4] 
=> (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 17:30:47.099256 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 17:30:47.099260 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 17:30:47.099263 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 17:30:47.099267 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 17:30:47.099271 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 17:30:47.099274 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 17:30:47.099278 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 17:30:47.099282 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 17:30:47.099285 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 17:30:47.099289 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 17:30:47.099293 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 17:30:47.099296 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 17:30:47.099300 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 17:30:47.099304 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 17:30:47.099307 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 17:30:47.099314 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 17:30:47.099317 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 17:30:47.099321 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-08-29 
17:30:47.099325 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 17:30:47.099328 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 17:30:47.099332 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 17:30:47.099336 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 17:30:47.099340 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-08-29 17:30:47.099352 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 17:30:47.099356 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 17:30:47.099360 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 17:30:47.099363 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-08-29 17:30:47.099367 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 17:30:47.099371 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 17:30:47.099375 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-08-29 17:30:47.099378 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-08-29 17:30:47.099382 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-08-29 17:30:47.099386 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-08-29 17:30:47.099390 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-08-29 17:30:47.099393 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-08-29 17:30:47.099397 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-08-29 17:30:47.099401 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-08-29 17:30:47.099404 | orchestrator | changed: 
[testbed-node-1] => (item=/var/log/ceph) 2025-08-29 17:30:47.099408 | orchestrator | 2025-08-29 17:30:47.099412 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-08-29 17:30:47.099416 | orchestrator | Friday 29 August 2025 17:22:33 +0000 (0:00:06.439) 0:03:27.011 ********* 2025-08-29 17:30:47.099420 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.099423 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.099427 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.099442 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:30:47.099446 | orchestrator | 2025-08-29 17:30:47.099450 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-08-29 17:30:47.099454 | orchestrator | Friday 29 August 2025 17:22:34 +0000 (0:00:01.526) 0:03:28.537 ********* 2025-08-29 17:30:47.099458 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 17:30:47.099462 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 17:30:47.099466 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 17:30:47.099470 | orchestrator | 2025-08-29 17:30:47.099473 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-08-29 17:30:47.099477 | orchestrator | Friday 29 August 2025 17:22:36 +0000 (0:00:01.252) 0:03:29.790 ********* 2025-08-29 17:30:47.099483 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 
17:30:47.099487 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 17:30:47.099493 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 17:30:47.099497 | orchestrator | 2025-08-29 17:30:47.099501 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-08-29 17:30:47.099505 | orchestrator | Friday 29 August 2025 17:22:37 +0000 (0:00:01.438) 0:03:31.228 ********* 2025-08-29 17:30:47.099508 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.099512 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.099516 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.099520 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.099524 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.099527 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.099531 | orchestrator | 2025-08-29 17:30:47.099535 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-08-29 17:30:47.099539 | orchestrator | Friday 29 August 2025 17:22:38 +0000 (0:00:00.907) 0:03:32.136 ********* 2025-08-29 17:30:47.099542 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.099546 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.099550 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.099554 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.099557 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.099561 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.099565 | orchestrator | 2025-08-29 17:30:47.099569 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-08-29 17:30:47.099572 | orchestrator | Friday 29 August 2025 17:22:39 +0000 (0:00:01.262) 0:03:33.398 
********* 2025-08-29 17:30:47.099576 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.099580 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.099584 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.099587 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.099591 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.099595 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.099599 | orchestrator | 2025-08-29 17:30:47.099602 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-08-29 17:30:47.099606 | orchestrator | Friday 29 August 2025 17:22:40 +0000 (0:00:00.671) 0:03:34.070 ********* 2025-08-29 17:30:47.099610 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.099613 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.099617 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.099621 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.099625 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.099628 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.099632 | orchestrator | 2025-08-29 17:30:47.099636 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-08-29 17:30:47.099640 | orchestrator | Friday 29 August 2025 17:22:41 +0000 (0:00:00.671) 0:03:34.742 ********* 2025-08-29 17:30:47.099643 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.099647 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.099651 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.099655 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.099658 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.099662 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.099666 | orchestrator | 2025-08-29 17:30:47.099669 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch 
--report' to see how many osds are to be created] ***
2025-08-29 17:30:47.099673 | orchestrator | Friday 29 August 2025 17:22:41 +0000 (0:00:00.797) 0:03:35.540 *********
2025-08-29 17:30:47.099677 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.099681 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.099685 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.099688 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.099692 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.099698 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.099702 | orchestrator |
2025-08-29 17:30:47.099706 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-08-29 17:30:47.099709 | orchestrator | Friday 29 August 2025 17:22:42 +0000 (0:00:00.992) 0:03:36.533 *********
2025-08-29 17:30:47.099713 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.099717 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.099721 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.099724 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.099728 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.099732 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.099735 | orchestrator |
2025-08-29 17:30:47.099739 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-08-29 17:30:47.099757 | orchestrator | Friday 29 August 2025 17:22:43 +0000 (0:00:01.016) 0:03:37.549 *********
2025-08-29 17:30:47.099762 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.099765 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.099769 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.099773 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.099777 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.099780 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.099784 | orchestrator |
2025-08-29 17:30:47.099788 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-08-29 17:30:47.099792 | orchestrator | Friday 29 August 2025 17:22:44 +0000 (0:00:00.583) 0:03:38.133 *********
2025-08-29 17:30:47.099796 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.099799 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.099803 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.099807 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.099810 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.099814 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.099818 | orchestrator |
2025-08-29 17:30:47.099822 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-08-29 17:30:47.099828 | orchestrator | Friday 29 August 2025 17:22:47 +0000 (0:00:02.812) 0:03:40.945 *********
2025-08-29 17:30:47.099831 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.099835 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.099839 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.099843 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.099847 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.099850 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.099854 | orchestrator |
2025-08-29 17:30:47.099858 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-08-29 17:30:47.099862 | orchestrator | Friday 29 August 2025 17:22:47 +0000 (0:00:00.568) 0:03:41.514 *********
2025-08-29 17:30:47.099866 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.099869 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.099873 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.099877 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.099881 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.099884 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.099888 | orchestrator |
2025-08-29 17:30:47.099892 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-08-29 17:30:47.099896 | orchestrator | Friday 29 August 2025 17:22:48 +0000 (0:00:01.122) 0:03:42.636 *********
2025-08-29 17:30:47.099899 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.099903 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.099907 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.099911 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.099914 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.099918 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.099922 | orchestrator |
2025-08-29 17:30:47.099926 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-08-29 17:30:47.099932 | orchestrator | Friday 29 August 2025 17:22:49 +0000 (0:00:00.994) 0:03:43.630 *********
2025-08-29 17:30:47.099936 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-08-29 17:30:47.099940 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-08-29 17:30:47.099944 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.099947 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.099951 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.099955 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-08-29 17:30:47.099959 | orchestrator |
2025-08-29 17:30:47.099962 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-08-29 17:30:47.099966 | orchestrator | Friday 29 August 2025 17:22:51 +0000 (0:00:01.255) 0:03:44.886 *********
2025-08-29 17:30:47.099971 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-08-29 17:30:47.099976 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-08-29 17:30:47.099980 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.099984 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-08-29 17:30:47.099988 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-08-29 17:30:47.099992 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.100005 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-08-29 17:30:47.100010 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-08-29 17:30:47.100014 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.100018 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.100022 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.100026 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.100029 | orchestrator |
2025-08-29 17:30:47.100033 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-08-29 17:30:47.100039 | orchestrator | Friday 29 August 2025 17:22:52 +0000 (0:00:00.793) 0:03:45.679 *********
2025-08-29 17:30:47.100043 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100047 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.100050 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.100054 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.100062 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.100065 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.100069 | orchestrator |
2025-08-29 17:30:47.100073 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-08-29 17:30:47.100077 | orchestrator | Friday 29 August 2025 17:22:52 +0000 (0:00:00.682) 0:03:46.362 *********
2025-08-29 17:30:47.100080 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100084 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.100088 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.100092 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.100095 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.100099 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.100103 | orchestrator |
2025-08-29 17:30:47.100106 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-08-29 17:30:47.100110 | orchestrator | Friday 29 August 2025 17:22:53 +0000 (0:00:00.512) 0:03:46.875 *********
2025-08-29 17:30:47.100114 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100118 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.100121 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.100125 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.100129 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.100132 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.100136 | orchestrator |
2025-08-29 17:30:47.100140 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-08-29 17:30:47.100144 | orchestrator | Friday 29 August 2025 17:22:53 +0000 (0:00:00.694) 0:03:47.569 *********
2025-08-29 17:30:47.100147 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.100151 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100155 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.100159 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.100162 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.100166 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.100170 | orchestrator |
2025-08-29 17:30:47.100173 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-08-29 17:30:47.100177 | orchestrator | Friday 29 August 2025 17:22:54 +0000 (0:00:01.098) 0:03:48.283 *********
2025-08-29 17:30:47.100181 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100185 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.100188 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.100192 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.100196 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.100199 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.100203 | orchestrator |
2025-08-29 17:30:47.100207 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-08-29 17:30:47.100211 | orchestrator | Friday 29 August 2025 17:22:55 +0000 (0:00:01.098) 0:03:49.382 *********
2025-08-29 17:30:47.100214 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.100218 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.100222 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.100226 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.100229 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.100233 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.100237 | orchestrator |
2025-08-29 17:30:47.100241 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-08-29 17:30:47.100244 | orchestrator | Friday 29 August 2025 17:22:57 +0000 (0:00:01.432) 0:03:50.814 *********
2025-08-29 17:30:47.100248 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 17:30:47.100252 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 17:30:47.100255 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 17:30:47.100259 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100263 | orchestrator |
2025-08-29 17:30:47.100269 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-08-29 17:30:47.100273 | orchestrator | Friday 29 August 2025 17:22:57 +0000 (0:00:00.324) 0:03:51.139 *********
2025-08-29 17:30:47.100276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 17:30:47.100280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 17:30:47.100284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 17:30:47.100287 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100291 | orchestrator |
2025-08-29 17:30:47.100295 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-08-29 17:30:47.100299 | orchestrator | Friday 29 August 2025 17:22:58 +0000 (0:00:00.784) 0:03:51.923 *********
2025-08-29 17:30:47.100302 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 17:30:47.100317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 17:30:47.100321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 17:30:47.100325 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100328 | orchestrator |
2025-08-29 17:30:47.100332 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-08-29 17:30:47.100336 | orchestrator | Friday 29 August 2025 17:22:59 +0000 (0:00:00.780) 0:03:52.704 *********
2025-08-29 17:30:47.100340 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.100364 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.100369 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.100373 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.100376 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.100380 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.100384 | orchestrator |
2025-08-29 17:30:47.100387 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-08-29 17:30:47.100391 | orchestrator | Friday 29 August 2025 17:23:00 +0000 (0:00:01.222) 0:03:53.926 *********
2025-08-29 17:30:47.100395 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-08-29 17:30:47.100399 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-08-29 17:30:47.100404 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-08-29 17:30:47.100408 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-08-29 17:30:47.100412 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.100416 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-08-29 17:30:47.100419 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.100423 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-08-29 17:30:47.100427 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.100430 | orchestrator |
2025-08-29 17:30:47.100434 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-08-29 17:30:47.100438 | orchestrator | Friday 29 August 2025 17:23:02 +0000 (0:00:02.142) 0:03:56.069 *********
2025-08-29 17:30:47.100442 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:30:47.100445 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:30:47.100449 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:30:47.100453 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:30:47.100456 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:30:47.100460 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:30:47.100464 | orchestrator |
2025-08-29 17:30:47.100467 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-08-29 17:30:47.100471 | orchestrator | Friday 29 August 2025 17:23:04 +0000 (0:00:02.558) 0:03:58.628 *********
2025-08-29 17:30:47.100475 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:30:47.100478 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:30:47.100482 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:30:47.100486 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:30:47.100489 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:30:47.100493 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:30:47.100497 | orchestrator |
2025-08-29 17:30:47.100501 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-08-29 17:30:47.100507 | orchestrator | Friday 29 August 2025 17:23:06 +0000 (0:00:01.272) 0:03:59.900 *********
2025-08-29 17:30:47.100511 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100515 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.100519 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.100522 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:30:47.100526 | orchestrator |
2025-08-29 17:30:47.100530 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-08-29 17:30:47.100534 | orchestrator | Friday 29 August 2025 17:23:07 +0000 (0:00:01.248) 0:04:01.149 *********
2025-08-29 17:30:47.100537 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.100541 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.100545 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.100549 | orchestrator |
2025-08-29 17:30:47.100552 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-08-29 17:30:47.100556 | orchestrator | Friday 29 August 2025 17:23:07 +0000 (0:00:00.422) 0:04:01.572 *********
2025-08-29 17:30:47.100560 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:30:47.100563 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:30:47.100567 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:30:47.100571 | orchestrator |
2025-08-29 17:30:47.100574 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-08-29 17:30:47.100578 | orchestrator | Friday 29 August 2025 17:23:09 +0000 (0:00:01.229) 0:04:02.802 *********
2025-08-29 17:30:47.100582 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 17:30:47.100585 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 17:30:47.100589 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 17:30:47.100593 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.100597 | orchestrator |
2025-08-29 17:30:47.100600 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-08-29 17:30:47.100604 | orchestrator | Friday 29 August 2025 17:23:09 +0000 (0:00:00.838) 0:04:03.640 *********
2025-08-29 17:30:47.100608 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.100611 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.100615 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.100619 | orchestrator |
2025-08-29 17:30:47.100623 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-08-29 17:30:47.100626 | orchestrator | Friday 29 August 2025 17:23:10 +0000 (0:00:00.387) 0:04:04.028 *********
2025-08-29 17:30:47.100630 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.100634 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.100637 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.100641 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:30:47.100645 | orchestrator |
2025-08-29 17:30:47.100649 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-08-29 17:30:47.100652 | orchestrator | Friday 29 August 2025 17:23:11 +0000 (0:00:01.179) 0:04:05.208 *********
2025-08-29 17:30:47.100656 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 17:30:47.100671 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 17:30:47.100675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 17:30:47.100679 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100683 | orchestrator |
2025-08-29 17:30:47.100686 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-08-29 17:30:47.100690 | orchestrator | Friday 29 August 2025 17:23:11 +0000 (0:00:00.398) 0:04:05.606 *********
2025-08-29 17:30:47.100694 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100698 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.100701 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.100705 | orchestrator |
2025-08-29 17:30:47.100712 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-08-29 17:30:47.100715 | orchestrator | Friday 29 August 2025 17:23:12 +0000 (0:00:00.373) 0:04:05.979 *********
2025-08-29 17:30:47.100719 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100723 | orchestrator |
2025-08-29 17:30:47.100728 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-08-29 17:30:47.100735 | orchestrator | Friday 29 August 2025 17:23:13 +0000 (0:00:00.886) 0:04:06.866 *********
2025-08-29 17:30:47.100741 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100745 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.100749 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.100752 | orchestrator |
2025-08-29 17:30:47.100756 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-08-29 17:30:47.100760 | orchestrator | Friday 29 August 2025 17:23:13 +0000 (0:00:00.561) 0:04:07.427 *********
2025-08-29 17:30:47.100763 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100767 | orchestrator |
2025-08-29 17:30:47.100771 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-08-29 17:30:47.100774 | orchestrator | Friday 29 August 2025 17:23:14 +0000 (0:00:00.325) 0:04:07.753 *********
2025-08-29 17:30:47.100778 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100782 | orchestrator |
2025-08-29 17:30:47.100786 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-08-29 17:30:47.100789 | orchestrator | Friday 29 August 2025 17:23:14 +0000 (0:00:00.274) 0:04:08.027 *********
2025-08-29 17:30:47.100793 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100798 | orchestrator |
2025-08-29 17:30:47.100804 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-08-29 17:30:47.100808 | orchestrator | Friday 29 August 2025 17:23:14 +0000 (0:00:00.132) 0:04:08.159 *********
2025-08-29 17:30:47.100812 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100816 | orchestrator |
2025-08-29 17:30:47.100819 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-08-29 17:30:47.100823 | orchestrator | Friday 29 August 2025 17:23:14 +0000 (0:00:00.244) 0:04:08.404 *********
2025-08-29 17:30:47.100827 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100831 | orchestrator |
2025-08-29 17:30:47.100834 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-08-29 17:30:47.100838 | orchestrator | Friday 29 August 2025 17:23:15 +0000 (0:00:00.254) 0:04:08.658 *********
2025-08-29 17:30:47.100842 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 17:30:47.100846 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 17:30:47.100853 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 17:30:47.100857 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100861 | orchestrator |
2025-08-29 17:30:47.100864 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-08-29 17:30:47.100868 | orchestrator | Friday 29 August 2025 17:23:15 +0000 (0:00:00.389) 0:04:09.048 *********
2025-08-29 17:30:47.100872 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100876 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.100879 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.100883 | orchestrator |
2025-08-29 17:30:47.100887 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-08-29 17:30:47.100891 | orchestrator | Friday 29 August 2025 17:23:15 +0000 (0:00:00.589) 0:04:09.637 *********
2025-08-29 17:30:47.100896 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100901 | orchestrator |
2025-08-29 17:30:47.100905 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-08-29 17:30:47.100909 | orchestrator | Friday 29 August 2025 17:23:16 +0000 (0:00:00.242) 0:04:09.879 *********
2025-08-29 17:30:47.100913 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.100916 | orchestrator |
2025-08-29 17:30:47.100920 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-08-29 17:30:47.100926 | orchestrator | Friday 29 August 2025 17:23:16 +0000 (0:00:00.227) 0:04:10.107 *********
2025-08-29 17:30:47.100930 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.100934 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.100938 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.100941 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:30:47.100945 | orchestrator |
2025-08-29 17:30:47.100949 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-08-29 17:30:47.100953 | orchestrator | Friday 29 August 2025 17:23:17 +0000 (0:00:00.921) 0:04:11.028 *********
2025-08-29 17:30:47.100956 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.100960 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.100964 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.100971 | orchestrator |
2025-08-29 17:30:47.100975 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-08-29 17:30:47.100979 | orchestrator | Friday 29 August 2025 17:23:17 +0000 (0:00:00.616) 0:04:11.645 *********
2025-08-29 17:30:47.100983 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:30:47.100986 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:30:47.100990 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:30:47.100994 | orchestrator |
2025-08-29 17:30:47.100997 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-08-29 17:30:47.101001 | orchestrator | Friday 29 August 2025 17:23:19 +0000 (0:00:01.183) 0:04:12.828 *********
2025-08-29 17:30:47.101016 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 17:30:47.101020 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 17:30:47.101024 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 17:30:47.101028 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.101031 | orchestrator |
2025-08-29 17:30:47.101035 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-08-29 17:30:47.101039 | orchestrator | Friday 29 August 2025 17:23:19 +0000 (0:00:00.622) 0:04:13.450 *********
2025-08-29 17:30:47.101043 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.101046 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.101050 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.101056 | orchestrator |
2025-08-29 17:30:47.101061 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-08-29 17:30:47.101065 | orchestrator | Friday 29 August 2025 17:23:20 +0000 (0:00:00.345) 0:04:13.796 *********
2025-08-29 17:30:47.101069 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.101072 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.101076 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.101082 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:30:47.101086 | orchestrator |
2025-08-29 17:30:47.101090 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-08-29 17:30:47.101094 | orchestrator | Friday 29 August 2025 17:23:21 +0000 (0:00:01.217) 0:04:15.013 *********
2025-08-29 17:30:47.101097 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.101101 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.101105 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.101108 | orchestrator |
2025-08-29 17:30:47.101112 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-08-29 17:30:47.101116 | orchestrator | Friday 29 August 2025 17:23:21 +0000 (0:00:00.337) 0:04:15.351 *********
2025-08-29 17:30:47.101120 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:30:47.101124 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:30:47.101127 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:30:47.101131 | orchestrator |
2025-08-29 17:30:47.101135 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-08-29 17:30:47.101139 | orchestrator | Friday 29 August 2025 17:23:23 +0000 (0:00:01.481) 0:04:16.832 *********
2025-08-29 17:30:47.101145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 17:30:47.101149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 17:30:47.101152 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 17:30:47.101156 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.101160 | orchestrator |
2025-08-29 17:30:47.101164 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-08-29 17:30:47.101167 | orchestrator | Friday 29 August 2025 17:23:23 +0000 (0:00:00.642) 0:04:17.475 *********
2025-08-29 17:30:47.101171 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.101175 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.101178 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.101182 | orchestrator |
2025-08-29 17:30:47.101186 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-08-29 17:30:47.101190 | orchestrator | Friday 29 August 2025 17:23:24 +0000 (0:00:00.353) 0:04:17.828 *********
2025-08-29 17:30:47.101193 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.101197 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.101201 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.101204 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.101208 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.101212 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.101215 | orchestrator |
2025-08-29 17:30:47.101219 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-08-29 17:30:47.101223 | orchestrator | Friday 29 August 2025 17:23:24 +0000 (0:00:00.647) 0:04:18.476 *********
2025-08-29 17:30:47.101227 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.101230 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.101234 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.101238 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:30:47.101241 | orchestrator |
2025-08-29 17:30:47.101245 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-08-29 17:30:47.101249 | orchestrator | Friday 29 August 2025 17:23:25 +0000 (0:00:01.067) 0:04:19.544 *********
2025-08-29 17:30:47.101253 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.101256 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.101260 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.101264 | orchestrator |
2025-08-29 17:30:47.101270 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-08-29 17:30:47.101275 | orchestrator | Friday 29 August 2025 17:23:26 +0000 (0:00:00.329) 0:04:19.873 *********
2025-08-29 17:30:47.101279 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:30:47.101283 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:30:47.101287 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:30:47.101290 | orchestrator |
2025-08-29 17:30:47.101294 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-08-29 17:30:47.101298 | orchestrator | Friday 29 August 2025 17:23:27 +0000 (0:00:01.369) 0:04:21.243 *********
2025-08-29 17:30:47.101301 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 17:30:47.101305 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 17:30:47.101309 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 17:30:47.101313 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.101316 | orchestrator |
2025-08-29 17:30:47.101320 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-08-29 17:30:47.101324 | orchestrator | Friday 29 August 2025 17:23:28 +0000 (0:00:00.606) 0:04:21.850 *********
2025-08-29 17:30:47.101328 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.101331 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.101335 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.101339 | orchestrator |
2025-08-29 17:30:47.101361 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-08-29 17:30:47.101369 | orchestrator |
2025-08-29 17:30:47.101373 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-08-29 17:30:47.101377 | orchestrator | Friday 29 August 2025 17:23:28 +0000 (0:00:00.664) 0:04:22.514 *********
2025-08-29 17:30:47.101380 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:30:47.101384 | orchestrator |
2025-08-29 17:30:47.101388 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-08-29 17:30:47.101392 | orchestrator | Friday 29 August 2025 17:23:29 +0000 (0:00:01.016) 0:04:23.530 *********
2025-08-29 17:30:47.101395 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:30:47.101399 | orchestrator |
2025-08-29 17:30:47.101403 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-08-29 17:30:47.101407 | orchestrator | Friday 29 August 2025 17:23:30 +0000 (0:00:00.585) 0:04:24.116 *********
2025-08-29 17:30:47.101410 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.101416 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.101420 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.101423 | orchestrator |
2025-08-29 17:30:47.101427 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-08-29 17:30:47.101431 | orchestrator | Friday 29 August 2025 17:23:31 +0000 (0:00:00.721) 0:04:24.838 *********
2025-08-29 17:30:47.101435 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.101438 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.101442 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.101446 | orchestrator |
2025-08-29 17:30:47.101449 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-08-29 17:30:47.101453 | orchestrator | Friday 29 August 2025 17:23:31 +0000 (0:00:00.343) 0:04:25.181 *********
2025-08-29 17:30:47.101457 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.101461 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.101464 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.101468 | orchestrator |
2025-08-29 17:30:47.101472 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-08-29 17:30:47.101475 | orchestrator | Friday 29 August 2025 17:23:32 +0000 (0:00:00.587) 0:04:25.769 *********
2025-08-29 17:30:47.101479 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.101483 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.101487 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.101490 | orchestrator |
2025-08-29 17:30:47.101494 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-08-29 17:30:47.101498 | orchestrator | Friday 29 August 2025 17:23:32 +0000 (0:00:00.293) 0:04:26.062 *********
2025-08-29 17:30:47.101501 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.101505 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.101509 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.101512 | orchestrator |
2025-08-29 17:30:47.101516 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-08-29 17:30:47.101520 | orchestrator | Friday 29 August 2025 17:23:33 +0000 (0:00:00.719) 0:04:26.781 *********
2025-08-29 17:30:47.101524 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.101527 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.101531 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.101534 | orchestrator |
2025-08-29 17:30:47.101538 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-08-29 17:30:47.101542 | orchestrator | Friday 29 August 2025 17:23:33 +0000 (0:00:00.292) 0:04:27.074 *********
2025-08-29 17:30:47.101546 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.101549 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.101553 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.101557 | orchestrator |
2025-08-29 17:30:47.101561 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-08-29 17:30:47.101567 | orchestrator | Friday 29 August 2025 17:23:33 +0000 (0:00:00.513) 0:04:27.588 *********
2025-08-29 17:30:47.101570 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.101574 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.101578 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.101582 | orchestrator |
2025-08-29 17:30:47.101589 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-08-29 17:30:47.101593 | orchestrator | Friday 29 August 2025 17:23:34 +0000 (0:00:00.740) 0:04:28.328 *********
2025-08-29 17:30:47.101597 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.101601 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.101604 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.101608 | orchestrator |
2025-08-29 17:30:47.101612 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-08-29 17:30:47.101615 | orchestrator | Friday 29 August 2025
17:23:35 +0000 (0:00:00.666) 0:04:28.994 ********* 2025-08-29 17:30:47.101619 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.101623 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.101627 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.101630 | orchestrator | 2025-08-29 17:30:47.101634 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 17:30:47.101638 | orchestrator | Friday 29 August 2025 17:23:35 +0000 (0:00:00.300) 0:04:29.295 ********* 2025-08-29 17:30:47.101641 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.101645 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.101649 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.101652 | orchestrator | 2025-08-29 17:30:47.101656 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 17:30:47.101660 | orchestrator | Friday 29 August 2025 17:23:36 +0000 (0:00:00.624) 0:04:29.920 ********* 2025-08-29 17:30:47.101663 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.101667 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.101671 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.101675 | orchestrator | 2025-08-29 17:30:47.101678 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 17:30:47.101682 | orchestrator | Friday 29 August 2025 17:23:36 +0000 (0:00:00.321) 0:04:30.242 ********* 2025-08-29 17:30:47.101686 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.101690 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.101704 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.101709 | orchestrator | 2025-08-29 17:30:47.101713 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 17:30:47.101716 | orchestrator | Friday 29 August 2025 17:23:36 +0000 
(0:00:00.336) 0:04:30.578 ********* 2025-08-29 17:30:47.101720 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.101724 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.101727 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.101731 | orchestrator | 2025-08-29 17:30:47.101735 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 17:30:47.101738 | orchestrator | Friday 29 August 2025 17:23:37 +0000 (0:00:00.335) 0:04:30.913 ********* 2025-08-29 17:30:47.101742 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.101746 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.101749 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.101753 | orchestrator | 2025-08-29 17:30:47.101757 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 17:30:47.101761 | orchestrator | Friday 29 August 2025 17:23:37 +0000 (0:00:00.562) 0:04:31.476 ********* 2025-08-29 17:30:47.101764 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.101768 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.101773 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.101777 | orchestrator | 2025-08-29 17:30:47.101781 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 17:30:47.101785 | orchestrator | Friday 29 August 2025 17:23:38 +0000 (0:00:00.302) 0:04:31.778 ********* 2025-08-29 17:30:47.101792 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.101795 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.101799 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.101803 | orchestrator | 2025-08-29 17:30:47.101807 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 17:30:47.101810 | orchestrator | Friday 29 August 2025 17:23:38 +0000 (0:00:00.346) 
0:04:32.125 ********* 2025-08-29 17:30:47.101814 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.101818 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.101821 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.101825 | orchestrator | 2025-08-29 17:30:47.101829 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 17:30:47.101832 | orchestrator | Friday 29 August 2025 17:23:38 +0000 (0:00:00.373) 0:04:32.498 ********* 2025-08-29 17:30:47.101836 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.101840 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.101843 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.101847 | orchestrator | 2025-08-29 17:30:47.101851 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-08-29 17:30:47.101855 | orchestrator | Friday 29 August 2025 17:23:39 +0000 (0:00:00.844) 0:04:33.343 ********* 2025-08-29 17:30:47.101858 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.101862 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.101866 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.101869 | orchestrator | 2025-08-29 17:30:47.101873 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-08-29 17:30:47.101877 | orchestrator | Friday 29 August 2025 17:23:40 +0000 (0:00:00.366) 0:04:33.709 ********* 2025-08-29 17:30:47.101882 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1, testbed-node-0, testbed-node-2 2025-08-29 17:30:47.101888 | orchestrator | 2025-08-29 17:30:47.101892 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-08-29 17:30:47.101896 | orchestrator | Friday 29 August 2025 17:23:40 +0000 (0:00:00.684) 0:04:34.394 ********* 2025-08-29 17:30:47.101900 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
17:30:47.101903 | orchestrator | 2025-08-29 17:30:47.101907 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-08-29 17:30:47.101911 | orchestrator | Friday 29 August 2025 17:23:41 +0000 (0:00:00.418) 0:04:34.813 ********* 2025-08-29 17:30:47.101914 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-08-29 17:30:47.101918 | orchestrator | 2025-08-29 17:30:47.101922 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-08-29 17:30:47.101926 | orchestrator | Friday 29 August 2025 17:23:42 +0000 (0:00:01.180) 0:04:35.994 ********* 2025-08-29 17:30:47.101929 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.101933 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.101937 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.101940 | orchestrator | 2025-08-29 17:30:47.101944 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-08-29 17:30:47.101948 | orchestrator | Friday 29 August 2025 17:23:42 +0000 (0:00:00.398) 0:04:36.392 ********* 2025-08-29 17:30:47.101952 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.101955 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.101959 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.101962 | orchestrator | 2025-08-29 17:30:47.101966 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-08-29 17:30:47.101970 | orchestrator | Friday 29 August 2025 17:23:43 +0000 (0:00:00.398) 0:04:36.791 ********* 2025-08-29 17:30:47.101975 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:30:47.101981 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:30:47.101984 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:30:47.101988 | orchestrator | 2025-08-29 17:30:47.101992 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-08-29 
17:30:47.101996 | orchestrator | Friday 29 August 2025 17:23:44 +0000 (0:00:01.356) 0:04:38.147 ********* 2025-08-29 17:30:47.102002 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:30:47.102006 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:30:47.102009 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:30:47.102025 | orchestrator | 2025-08-29 17:30:47.102030 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-08-29 17:30:47.102033 | orchestrator | Friday 29 August 2025 17:23:45 +0000 (0:00:01.064) 0:04:39.211 ********* 2025-08-29 17:30:47.102037 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:30:47.102041 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:30:47.102045 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:30:47.102048 | orchestrator | 2025-08-29 17:30:47.102052 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-08-29 17:30:47.102067 | orchestrator | Friday 29 August 2025 17:23:46 +0000 (0:00:00.657) 0:04:39.869 ********* 2025-08-29 17:30:47.102071 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.102075 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.102079 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.102083 | orchestrator | 2025-08-29 17:30:47.102087 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-08-29 17:30:47.102094 | orchestrator | Friday 29 August 2025 17:23:46 +0000 (0:00:00.691) 0:04:40.560 ********* 2025-08-29 17:30:47.102098 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:30:47.102102 | orchestrator | 2025-08-29 17:30:47.102105 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-08-29 17:30:47.102109 | orchestrator | Friday 29 August 2025 17:23:48 +0000 (0:00:01.280) 0:04:41.841 ********* 2025-08-29 17:30:47.102113 | orchestrator | ok: [testbed-node-0] 
2025-08-29 17:30:47.102116 | orchestrator | 2025-08-29 17:30:47.102120 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-08-29 17:30:47.102124 | orchestrator | Friday 29 August 2025 17:23:48 +0000 (0:00:00.765) 0:04:42.607 ********* 2025-08-29 17:30:47.102128 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 17:30:47.102131 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:30:47.102139 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:30:47.102143 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 17:30:47.102147 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-08-29 17:30:47.102150 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 17:30:47.102154 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 17:30:47.102158 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-08-29 17:30:47.102162 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 17:30:47.102165 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-08-29 17:30:47.102169 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-08-29 17:30:47.102173 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-08-29 17:30:47.102176 | orchestrator | 2025-08-29 17:30:47.102180 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-08-29 17:30:47.102184 | orchestrator | Friday 29 August 2025 17:23:52 +0000 (0:00:03.391) 0:04:45.999 ********* 2025-08-29 17:30:47.102188 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:30:47.102191 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:30:47.102195 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:30:47.102198 | orchestrator 
| 2025-08-29 17:30:47.102202 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-08-29 17:30:47.102206 | orchestrator | Friday 29 August 2025 17:23:53 +0000 (0:00:01.468) 0:04:47.468 ********* 2025-08-29 17:30:47.102210 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.102213 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.102217 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.102221 | orchestrator | 2025-08-29 17:30:47.102224 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-08-29 17:30:47.102231 | orchestrator | Friday 29 August 2025 17:23:54 +0000 (0:00:00.420) 0:04:47.888 ********* 2025-08-29 17:30:47.102235 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.102239 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.102242 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.102246 | orchestrator | 2025-08-29 17:30:47.102250 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-08-29 17:30:47.102253 | orchestrator | Friday 29 August 2025 17:23:54 +0000 (0:00:00.394) 0:04:48.282 ********* 2025-08-29 17:30:47.102257 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:30:47.102261 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:30:47.102265 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:30:47.102268 | orchestrator | 2025-08-29 17:30:47.102272 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-08-29 17:30:47.102276 | orchestrator | Friday 29 August 2025 17:23:56 +0000 (0:00:01.868) 0:04:50.151 ********* 2025-08-29 17:30:47.102280 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:30:47.102283 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:30:47.102287 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:30:47.102291 | orchestrator | 2025-08-29 17:30:47.102294 | orchestrator | 
TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-08-29 17:30:47.102298 | orchestrator | Friday 29 August 2025 17:23:58 +0000 (0:00:01.626) 0:04:51.778 ********* 2025-08-29 17:30:47.102302 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.102306 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.102309 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.102313 | orchestrator | 2025-08-29 17:30:47.102317 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-08-29 17:30:47.102321 | orchestrator | Friday 29 August 2025 17:23:58 +0000 (0:00:00.324) 0:04:52.102 ********* 2025-08-29 17:30:47.102324 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:30:47.102328 | orchestrator | 2025-08-29 17:30:47.102332 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-08-29 17:30:47.102336 | orchestrator | Friday 29 August 2025 17:23:59 +0000 (0:00:00.599) 0:04:52.701 ********* 2025-08-29 17:30:47.102339 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.102350 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.102354 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.102357 | orchestrator | 2025-08-29 17:30:47.102361 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-08-29 17:30:47.102365 | orchestrator | Friday 29 August 2025 17:23:59 +0000 (0:00:00.660) 0:04:53.362 ********* 2025-08-29 17:30:47.102368 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.102372 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.102376 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.102380 | orchestrator | 2025-08-29 17:30:47.102383 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] 
************************************ 2025-08-29 17:30:47.102387 | orchestrator | Friday 29 August 2025 17:24:00 +0000 (0:00:00.361) 0:04:53.723 ********* 2025-08-29 17:30:47.102402 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-2, testbed-node-1 2025-08-29 17:30:47.102407 | orchestrator | 2025-08-29 17:30:47.102411 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-08-29 17:30:47.102414 | orchestrator | Friday 29 August 2025 17:24:00 +0000 (0:00:00.542) 0:04:54.266 ********* 2025-08-29 17:30:47.102418 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:30:47.102422 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:30:47.102425 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:30:47.102429 | orchestrator | 2025-08-29 17:30:47.102433 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-08-29 17:30:47.102440 | orchestrator | Friday 29 August 2025 17:24:02 +0000 (0:00:02.008) 0:04:56.274 ********* 2025-08-29 17:30:47.102447 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:30:47.102451 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:30:47.102455 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:30:47.102458 | orchestrator | 2025-08-29 17:30:47.102462 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-08-29 17:30:47.102466 | orchestrator | Friday 29 August 2025 17:24:04 +0000 (0:00:01.443) 0:04:57.718 ********* 2025-08-29 17:30:47.102470 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:30:47.102476 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:30:47.102480 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:30:47.102484 | orchestrator | 2025-08-29 17:30:47.102487 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-08-29 17:30:47.102491 | orchestrator 
| Friday 29 August 2025 17:24:05 +0000 (0:00:01.789) 0:04:59.507 ********* 2025-08-29 17:30:47.102495 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:30:47.102498 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:30:47.102502 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:30:47.102506 | orchestrator | 2025-08-29 17:30:47.102509 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-08-29 17:30:47.102513 | orchestrator | Friday 29 August 2025 17:24:07 +0000 (0:00:01.954) 0:05:01.462 ********* 2025-08-29 17:30:47.102517 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:30:47.102521 | orchestrator | 2025-08-29 17:30:47.102524 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-08-29 17:30:47.102528 | orchestrator | Friday 29 August 2025 17:24:08 +0000 (0:00:00.798) 0:05:02.261 ********* 2025-08-29 17:30:47.102532 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.102536 | orchestrator | 2025-08-29 17:30:47.102539 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-08-29 17:30:47.102543 | orchestrator | Friday 29 August 2025 17:24:09 +0000 (0:00:01.205) 0:05:03.467 ********* 2025-08-29 17:30:47.102547 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.102551 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.102554 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.102558 | orchestrator | 2025-08-29 17:30:47.102562 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-08-29 17:30:47.102566 | orchestrator | Friday 29 August 2025 17:24:19 +0000 (0:00:09.865) 0:05:13.332 ********* 2025-08-29 17:30:47.102569 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.102573 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
17:30:47.102577 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.102580 | orchestrator | 2025-08-29 17:30:47.102584 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-08-29 17:30:47.102588 | orchestrator | Friday 29 August 2025 17:24:19 +0000 (0:00:00.301) 0:05:13.634 ********* 2025-08-29 17:30:47.102592 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__43cc211cb1cf08b823d0c171b9856e0b22ca7791'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-08-29 17:30:47.102597 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__43cc211cb1cf08b823d0c171b9856e0b22ca7791'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-08-29 17:30:47.102601 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__43cc211cb1cf08b823d0c171b9856e0b22ca7791'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-08-29 17:30:47.102608 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__43cc211cb1cf08b823d0c171b9856e0b22ca7791'}}, {'key': 'ms_bind_ipv6', 'value': 
'False'}]) 2025-08-29 17:30:47.102625 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__43cc211cb1cf08b823d0c171b9856e0b22ca7791'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-08-29 17:30:47.102631 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__43cc211cb1cf08b823d0c171b9856e0b22ca7791'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__43cc211cb1cf08b823d0c171b9856e0b22ca7791'}])  2025-08-29 17:30:47.102635 | orchestrator | 2025-08-29 17:30:47.102639 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 17:30:47.102642 | orchestrator | Friday 29 August 2025 17:24:34 +0000 (0:00:14.712) 0:05:28.346 ********* 2025-08-29 17:30:47.102646 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.102652 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.102656 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.102660 | orchestrator | 2025-08-29 17:30:47.102664 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-08-29 17:30:47.102667 | orchestrator | Friday 29 August 2025 17:24:35 +0000 (0:00:00.419) 0:05:28.766 ********* 2025-08-29 17:30:47.102671 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:30:47.102675 | orchestrator | 2025-08-29 17:30:47.102678 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] 
******** 2025-08-29 17:30:47.102682 | orchestrator | Friday 29 August 2025 17:24:35 +0000 (0:00:00.536) 0:05:29.302 ********* 2025-08-29 17:30:47.102686 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.102690 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.102693 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.102697 | orchestrator | 2025-08-29 17:30:47.102701 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-08-29 17:30:47.102705 | orchestrator | Friday 29 August 2025 17:24:36 +0000 (0:00:00.650) 0:05:29.953 ********* 2025-08-29 17:30:47.102708 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.102712 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.102716 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.102719 | orchestrator | 2025-08-29 17:30:47.102723 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-08-29 17:30:47.102727 | orchestrator | Friday 29 August 2025 17:24:36 +0000 (0:00:00.373) 0:05:30.326 ********* 2025-08-29 17:30:47.102731 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 17:30:47.102734 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 17:30:47.102738 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 17:30:47.102742 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.102746 | orchestrator | 2025-08-29 17:30:47.102749 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-08-29 17:30:47.102753 | orchestrator | Friday 29 August 2025 17:24:37 +0000 (0:00:00.605) 0:05:30.931 ********* 2025-08-29 17:30:47.102757 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.102761 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.102764 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.102771 | orchestrator | 
2025-08-29 17:30:47.102774 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-08-29 17:30:47.102778 | orchestrator |
2025-08-29 17:30:47.102782 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-08-29 17:30:47.102786 | orchestrator | Friday 29 August 2025 17:24:38 +0000 (0:00:00.857) 0:05:31.788 *********
2025-08-29 17:30:47.102789 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:30:47.102793 | orchestrator |
2025-08-29 17:30:47.102797 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-08-29 17:30:47.102800 | orchestrator | Friday 29 August 2025 17:24:38 +0000 (0:00:00.533) 0:05:32.322 *********
2025-08-29 17:30:47.102804 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:30:47.102808 | orchestrator |
2025-08-29 17:30:47.102812 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-08-29 17:30:47.102815 | orchestrator | Friday 29 August 2025 17:24:39 +0000 (0:00:00.570) 0:05:32.893 *********
2025-08-29 17:30:47.102819 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.102823 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.102827 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.102830 | orchestrator |
2025-08-29 17:30:47.102835 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-08-29 17:30:47.102841 | orchestrator | Friday 29 August 2025 17:24:40 +0000 (0:00:01.115) 0:05:34.009 *********
2025-08-29 17:30:47.102845 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.102849 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.102853 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.102856 | orchestrator |
2025-08-29 17:30:47.102860 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-08-29 17:30:47.102864 | orchestrator | Friday 29 August 2025 17:24:40 +0000 (0:00:00.353) 0:05:34.363 *********
2025-08-29 17:30:47.102867 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.102871 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.102875 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.102878 | orchestrator |
2025-08-29 17:30:47.102882 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-08-29 17:30:47.102886 | orchestrator | Friday 29 August 2025 17:24:41 +0000 (0:00:00.382) 0:05:34.745 *********
2025-08-29 17:30:47.102890 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.102893 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.102907 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.102911 | orchestrator |
2025-08-29 17:30:47.102915 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-08-29 17:30:47.102919 | orchestrator | Friday 29 August 2025 17:24:41 +0000 (0:00:00.311) 0:05:35.056 *********
2025-08-29 17:30:47.102923 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.102927 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.102930 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.102934 | orchestrator |
2025-08-29 17:30:47.102938 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-08-29 17:30:47.102942 | orchestrator | Friday 29 August 2025 17:24:42 +0000 (0:00:01.023) 0:05:36.080 *********
2025-08-29 17:30:47.102945 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.102949 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.102953 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.102957 | orchestrator |
2025-08-29 17:30:47.102961 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-08-29 17:30:47.102964 | orchestrator | Friday 29 August 2025 17:24:42 +0000 (0:00:00.377) 0:05:36.458 *********
2025-08-29 17:30:47.102968 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.102972 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.102976 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.102982 | orchestrator |
2025-08-29 17:30:47.102987 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-08-29 17:30:47.102991 | orchestrator | Friday 29 August 2025 17:24:43 +0000 (0:00:00.314) 0:05:36.772 *********
2025-08-29 17:30:47.102995 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.102999 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.103003 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.103006 | orchestrator |
2025-08-29 17:30:47.103010 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-08-29 17:30:47.103014 | orchestrator | Friday 29 August 2025 17:24:43 +0000 (0:00:00.765) 0:05:37.538 *********
2025-08-29 17:30:47.103018 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.103021 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.103025 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.103029 | orchestrator |
2025-08-29 17:30:47.103032 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-08-29 17:30:47.103036 | orchestrator | Friday 29 August 2025 17:24:45 +0000 (0:00:01.128) 0:05:38.667 *********
2025-08-29 17:30:47.103040 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.103044 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.103047 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.103051 | orchestrator |
2025-08-29 17:30:47.103055 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-08-29 17:30:47.103059 | orchestrator | Friday 29 August 2025 17:24:45 +0000 (0:00:00.357) 0:05:39.025 *********
2025-08-29 17:30:47.103063 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.103066 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.103070 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.103074 | orchestrator |
2025-08-29 17:30:47.103078 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-08-29 17:30:47.103082 | orchestrator | Friday 29 August 2025 17:24:45 +0000 (0:00:00.363) 0:05:39.388 *********
2025-08-29 17:30:47.103085 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.103089 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.103093 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.103096 | orchestrator |
2025-08-29 17:30:47.103100 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-08-29 17:30:47.103104 | orchestrator | Friday 29 August 2025 17:24:46 +0000 (0:00:00.342) 0:05:39.730 *********
2025-08-29 17:30:47.103108 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.103111 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.103115 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.103119 | orchestrator |
2025-08-29 17:30:47.103123 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-08-29 17:30:47.103126 | orchestrator | Friday 29 August 2025 17:24:46 +0000 (0:00:00.621) 0:05:40.352 *********
2025-08-29 17:30:47.103130 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.103134 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.103138 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.103141 | orchestrator |
2025-08-29 17:30:47.103145 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-08-29 17:30:47.103149 | orchestrator | Friday 29 August 2025 17:24:47 +0000 (0:00:00.315) 0:05:40.668 *********
2025-08-29 17:30:47.103153 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.103156 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.103160 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.103164 | orchestrator |
2025-08-29 17:30:47.103168 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-08-29 17:30:47.103171 | orchestrator | Friday 29 August 2025 17:24:47 +0000 (0:00:00.326) 0:05:40.995 *********
2025-08-29 17:30:47.103175 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.103179 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.103182 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.103186 | orchestrator |
2025-08-29 17:30:47.103190 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-08-29 17:30:47.103196 | orchestrator | Friday 29 August 2025 17:24:47 +0000 (0:00:00.318) 0:05:41.313 *********
2025-08-29 17:30:47.103200 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.103204 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.103208 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.103211 | orchestrator |
2025-08-29 17:30:47.103215 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-08-29 17:30:47.103219 | orchestrator | Friday 29 August 2025 17:24:47 +0000 (0:00:00.325) 0:05:41.639 *********
2025-08-29 17:30:47.103223 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.103226 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.103230 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.103234 | orchestrator |
2025-08-29 17:30:47.103238 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-08-29 17:30:47.103241 | orchestrator | Friday 29 August 2025 17:24:48 +0000 (0:00:00.641) 0:05:42.280 *********
2025-08-29 17:30:47.103245 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.103249 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.103253 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.103256 | orchestrator |
2025-08-29 17:30:47.103270 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-08-29 17:30:47.103278 | orchestrator | Friday 29 August 2025 17:24:49 +0000 (0:00:00.564) 0:05:42.845 *********
2025-08-29 17:30:47.103282 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 17:30:47.103286 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 17:30:47.103290 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 17:30:47.103293 | orchestrator |
2025-08-29 17:30:47.103297 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-08-29 17:30:47.103301 | orchestrator | Friday 29 August 2025 17:24:50 +0000 (0:00:00.920) 0:05:43.766 *********
2025-08-29 17:30:47.103305 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:30:47.103308 | orchestrator |
2025-08-29 17:30:47.103312 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-08-29 17:30:47.103316 | orchestrator | Friday 29 August 2025 17:24:50 +0000 (0:00:00.840) 0:05:44.606 *********
2025-08-29 17:30:47.103320 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:30:47.103326 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:30:47.103329 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:30:47.103333 | orchestrator |
2025-08-29 17:30:47.103337 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-08-29 17:30:47.103341 | orchestrator | Friday 29 August 2025 17:24:51 +0000 (0:00:00.738) 0:05:45.344 *********
2025-08-29 17:30:47.103363 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.103367 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.103372 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.103379 | orchestrator |
2025-08-29 17:30:47.103383 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-08-29 17:30:47.103387 | orchestrator | Friday 29 August 2025 17:24:52 +0000 (0:00:00.379) 0:05:45.724 *********
2025-08-29 17:30:47.103390 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 17:30:47.103394 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 17:30:47.103398 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 17:30:47.103401 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-08-29 17:30:47.103405 | orchestrator |
2025-08-29 17:30:47.103409 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-08-29 17:30:47.103413 | orchestrator | Friday 29 August 2025 17:25:02 +0000 (0:00:10.502) 0:05:56.227 *********
2025-08-29 17:30:47.103416 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.103420 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.103424 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.103430 | orchestrator |
2025-08-29 17:30:47.103434 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-08-29 17:30:47.103438 | orchestrator | Friday 29 August 2025 17:25:03 +0000 (0:00:00.761) 0:05:56.988 *********
2025-08-29 17:30:47.103441 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-08-29 17:30:47.103445 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-08-29 17:30:47.103449 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-08-29 17:30:47.103453 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-08-29 17:30:47.103456 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 17:30:47.103460 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 17:30:47.103464 | orchestrator |
2025-08-29 17:30:47.103471 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-08-29 17:30:47.103475 | orchestrator | Friday 29 August 2025 17:25:05 +0000 (0:00:02.435) 0:05:59.424 *********
2025-08-29 17:30:47.103479 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-08-29 17:30:47.103483 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-08-29 17:30:47.103486 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-08-29 17:30:47.103490 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 17:30:47.103494 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-08-29 17:30:47.103498 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-08-29 17:30:47.103501 | orchestrator |
2025-08-29 17:30:47.103505 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-08-29 17:30:47.103509 | orchestrator | Friday 29 August 2025 17:25:06 +0000 (0:00:01.201) 0:06:00.626 *********
2025-08-29 17:30:47.103513 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.103516 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.103520 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.103524 | orchestrator |
2025-08-29 17:30:47.103527 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-08-29 17:30:47.103531 | orchestrator | Friday 29 August 2025 17:25:07 +0000 (0:00:00.724) 0:06:01.350 *********
2025-08-29 17:30:47.103535 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.103539 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.103542 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.103546 | orchestrator |
2025-08-29 17:30:47.103550 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-08-29 17:30:47.103554 | orchestrator | Friday 29 August 2025 17:25:08 +0000 (0:00:00.334) 0:06:01.685 *********
2025-08-29 17:30:47.103557 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.103561 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.103565 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.103568 | orchestrator |
2025-08-29 17:30:47.103572 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-08-29 17:30:47.103576 | orchestrator | Friday 29 August 2025 17:25:08 +0000 (0:00:00.636) 0:06:02.321 *********
2025-08-29 17:30:47.103580 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:30:47.103584 | orchestrator |
2025-08-29 17:30:47.103587 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-08-29 17:30:47.103591 | orchestrator | Friday 29 August 2025 17:25:09 +0000 (0:00:00.618) 0:06:02.940 *********
2025-08-29 17:30:47.103607 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.103611 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.103615 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.103619 | orchestrator |
2025-08-29 17:30:47.103622 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-08-29 17:30:47.103626 | orchestrator | Friday 29 August 2025 17:25:09 +0000 (0:00:00.391) 0:06:03.331 *********
2025-08-29 17:30:47.103630 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.103634 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.103640 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:47.103643 | orchestrator |
2025-08-29 17:30:47.103647 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-08-29 17:30:47.103651 | orchestrator | Friday 29 August 2025 17:25:10 +0000 (0:00:00.614) 0:06:03.946 *********
2025-08-29 17:30:47.103655 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:30:47.103658 | orchestrator |
2025-08-29 17:30:47.103664 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-08-29 17:30:47.103670 | orchestrator | Friday 29 August 2025 17:25:10 +0000 (0:00:00.549) 0:06:04.496 *********
2025-08-29 17:30:47.103676 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:30:47.103680 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:30:47.103683 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:30:47.103687 | orchestrator |
2025-08-29 17:30:47.103691 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-08-29 17:30:47.103694 | orchestrator | Friday 29 August 2025 17:25:12 +0000 (0:00:01.217) 0:06:05.714 *********
2025-08-29 17:30:47.103698 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:30:47.103702 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:30:47.103705 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:30:47.103709 | orchestrator |
2025-08-29 17:30:47.103713 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-08-29 17:30:47.103716 | orchestrator | Friday 29 August 2025 17:25:13 +0000 (0:00:01.438) 0:06:07.152 *********
2025-08-29 17:30:47.103720 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:30:47.103724 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:30:47.103727 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:30:47.103731 | orchestrator |
2025-08-29 17:30:47.103735 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-08-29 17:30:47.103738 | orchestrator | Friday 29 August 2025 17:25:15 +0000 (0:00:01.628) 0:06:08.781 *********
2025-08-29 17:30:47.103742 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:30:47.103746 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:30:47.103749 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:30:47.103753 | orchestrator |
2025-08-29 17:30:47.103757 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2025-08-29 17:30:47.103760 | orchestrator | Friday 29 August 2025 17:25:16 +0000 (0:00:01.805) 0:06:10.586 *********
2025-08-29 17:30:47.103764 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.103768 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:47.103772 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-08-29 17:30:47.103775 | orchestrator |
2025-08-29 17:30:47.103779 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2025-08-29 17:30:47.103783 | orchestrator | Friday 29 August 2025 17:25:17 +0000 (0:00:00.421) 0:06:11.007 *********
2025-08-29 17:30:47.103786 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2025-08-29 17:30:47.103790 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2025-08-29 17:30:47.103794 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2025-08-29 17:30:47.103797 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2025-08-29 17:30:47.103801 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-08-29 17:30:47.103805 | orchestrator |
2025-08-29 17:30:47.103809 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-08-29 17:30:47.103812 | orchestrator | Friday 29 August 2025 17:25:42 +0000 (0:00:24.760) 0:06:35.768 *********
2025-08-29 17:30:47.103816 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-08-29 17:30:47.103820 | orchestrator |
2025-08-29 17:30:47.103823 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-08-29 17:30:47.103829 | orchestrator | Friday 29 August 2025 17:25:43 +0000 (0:00:01.156) 0:06:36.925 *********
2025-08-29 17:30:47.103833 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.103837 | orchestrator |
2025-08-29 17:30:47.103841 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-08-29 17:30:47.103844 | orchestrator | Friday 29 August 2025 17:25:43 +0000 (0:00:00.339) 0:06:37.264 *********
2025-08-29 17:30:47.103848 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.103852 | orchestrator |
2025-08-29 17:30:47.103856 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-08-29 17:30:47.103859 | orchestrator | Friday 29 August 2025 17:25:43 +0000 (0:00:00.161) 0:06:37.426 *********
2025-08-29 17:30:47.103863 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-08-29 17:30:47.103867 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-08-29 17:30:47.103870 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-08-29 17:30:47.103874 | orchestrator |
2025-08-29 17:30:47.103878 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-08-29 17:30:47.103882 | orchestrator | Friday 29 August 2025 17:25:50 +0000 (0:00:06.239) 0:06:43.665 *********
2025-08-29 17:30:47.103885 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-08-29 17:30:47.103900 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-08-29 17:30:47.103904 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-08-29 17:30:47.103908 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-08-29 17:30:47.103911 | orchestrator |
2025-08-29 17:30:47.103915 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-08-29 17:30:47.103919 | orchestrator | Friday 29 August 2025 17:25:55 +0000 (0:00:05.000) 0:06:48.665 *********
2025-08-29 17:30:47.103923 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:30:47.103926 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:30:47.103930 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:30:47.103934 | orchestrator |
2025-08-29 17:30:47.103937 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-08-29 17:30:47.103941 | orchestrator | Friday 29 August 2025 17:25:55 +0000 (0:00:00.971) 0:06:49.637 *********
2025-08-29 17:30:47.103945 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:30:47.103949 | orchestrator |
2025-08-29 17:30:47.103952 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-08-29 17:30:47.103959 | orchestrator | Friday 29 August 2025 17:25:56 +0000 (0:00:00.577) 0:06:50.215 *********
2025-08-29 17:30:47.103963 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.103967 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.103970 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.103974 | orchestrator |
2025-08-29 17:30:47.103978 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-08-29 17:30:47.103982 | orchestrator | Friday 29 August 2025 17:25:56 +0000 (0:00:00.410) 0:06:50.625 *********
2025-08-29 17:30:47.103985 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:30:47.103989 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:30:47.103993 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:30:47.103996 | orchestrator |
2025-08-29 17:30:47.104000 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-08-29 17:30:47.104004 | orchestrator | Friday 29 August 2025 17:25:58 +0000 (0:00:01.507) 0:06:52.133 *********
2025-08-29 17:30:47.104008 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 17:30:47.104011 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 17:30:47.104015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 17:30:47.104019 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:47.104025 | orchestrator |
2025-08-29 17:30:47.104028 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-08-29 17:30:47.104032 | orchestrator | Friday 29 August 2025 17:25:59 +0000 (0:00:00.564) 0:06:52.697 *********
2025-08-29 17:30:47.104036 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:47.104040 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:47.104043 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:47.104047 | orchestrator |
2025-08-29 17:30:47.104051 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-08-29 17:30:47.104055 | orchestrator |
2025-08-29 17:30:47.104058 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-08-29 17:30:47.104062 | orchestrator | Friday 29 August 2025 17:25:59 +0000 (0:00:00.581) 0:06:53.279 *********
2025-08-29 17:30:47.104066 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:30:47.104069 | orchestrator |
2025-08-29 17:30:47.104073 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-08-29 17:30:47.104077 | orchestrator | Friday 29 August 2025 17:26:00 +0000 (0:00:00.737) 0:06:54.016 *********
2025-08-29 17:30:47.104081 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:30:47.104084 | orchestrator |
2025-08-29 17:30:47.104088 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-08-29 17:30:47.104092 | orchestrator | Friday 29 August 2025 17:26:00 +0000 (0:00:00.563) 0:06:54.580 *********
2025-08-29 17:30:47.104095 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.104099 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.104103 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.104107 | orchestrator |
2025-08-29 17:30:47.104110 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-08-29 17:30:47.104114 | orchestrator | Friday 29 August 2025 17:26:01 +0000 (0:00:00.326) 0:06:54.906 *********
2025-08-29 17:30:47.104118 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.104122 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.104126 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.104129 | orchestrator |
2025-08-29 17:30:47.104133 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-08-29 17:30:47.104137 | orchestrator | Friday 29 August 2025 17:26:02 +0000 (0:00:01.012) 0:06:55.918 *********
2025-08-29 17:30:47.104140 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.104145 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.104152 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.104156 | orchestrator |
2025-08-29 17:30:47.104159 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-08-29 17:30:47.104163 | orchestrator | Friday 29 August 2025 17:26:02 +0000 (0:00:00.729) 0:06:56.648 *********
2025-08-29 17:30:47.104167 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.104170 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.104174 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.104178 | orchestrator |
2025-08-29 17:30:47.104182 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-08-29 17:30:47.104185 | orchestrator | Friday 29 August 2025 17:26:03 +0000 (0:00:00.677) 0:06:57.326 *********
2025-08-29 17:30:47.104189 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.104193 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.104196 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.104200 | orchestrator |
2025-08-29 17:30:47.104204 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-08-29 17:30:47.104207 | orchestrator | Friday 29 August 2025 17:26:03 +0000 (0:00:00.304) 0:06:57.631 *********
2025-08-29 17:30:47.104221 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.104225 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.104229 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.104235 | orchestrator |
2025-08-29 17:30:47.104239 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-08-29 17:30:47.104243 | orchestrator | Friday 29 August 2025 17:26:04 +0000 (0:00:00.613) 0:06:58.244 *********
2025-08-29 17:30:47.104247 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.104250 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.104254 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.104258 | orchestrator |
2025-08-29 17:30:47.104261 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-08-29 17:30:47.104265 | orchestrator | Friday 29 August 2025 17:26:05 +0000 (0:00:00.492) 0:06:58.737 *********
2025-08-29 17:30:47.104269 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.104272 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.104276 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.104280 | orchestrator |
2025-08-29 17:30:47.104283 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-08-29 17:30:47.104287 | orchestrator | Friday 29 August 2025 17:26:05 +0000 (0:00:00.907) 0:06:59.644 *********
2025-08-29 17:30:47.104293 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.104297 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.104300 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.104304 | orchestrator |
2025-08-29 17:30:47.104308 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-08-29 17:30:47.104312 | orchestrator | Friday 29 August 2025 17:26:06 +0000 (0:00:00.741) 0:07:00.386 *********
2025-08-29 17:30:47.104315 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.104319 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.104323 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.104326 | orchestrator |
2025-08-29 17:30:47.104330 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-08-29 17:30:47.104334 | orchestrator | Friday 29 August 2025 17:26:07 +0000 (0:00:00.614) 0:07:01.000 *********
2025-08-29 17:30:47.104338 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.104349 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.104353 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.104356 | orchestrator |
2025-08-29 17:30:47.104360 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-08-29 17:30:47.104364 | orchestrator | Friday 29 August 2025 17:26:07 +0000 (0:00:00.340) 0:07:01.341 *********
2025-08-29 17:30:47.104368 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.104371 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.104375 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.104379 | orchestrator |
2025-08-29 17:30:47.104382 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-08-29 17:30:47.104386 | orchestrator | Friday 29 August 2025 17:26:08 +0000 (0:00:00.349) 0:07:01.691 *********
2025-08-29 17:30:47.104390 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.104394 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.104397 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.104401 | orchestrator |
2025-08-29 17:30:47.104405 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-08-29 17:30:47.104409 | orchestrator | Friday 29 August 2025 17:26:08 +0000 (0:00:00.351) 0:07:02.043 *********
2025-08-29 17:30:47.104412 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.104416 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.104420 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.104423 | orchestrator |
2025-08-29 17:30:47.104427 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-08-29 17:30:47.104431 | orchestrator | Friday 29 August 2025 17:26:09 +0000 (0:00:00.620) 0:07:02.663 *********
2025-08-29 17:30:47.104435 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.104439 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.104442 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.104446 | orchestrator |
2025-08-29 17:30:47.104450 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-08-29 17:30:47.104456 | orchestrator | Friday 29 August 2025 17:26:09 +0000 (0:00:00.350) 0:07:03.013 *********
2025-08-29 17:30:47.104460 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.104463 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.104467 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.104471 | orchestrator |
2025-08-29 17:30:47.104474 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-08-29 17:30:47.104478 | orchestrator | Friday 29 August 2025 17:26:09 +0000 (0:00:00.320) 0:07:03.333 *********
2025-08-29 17:30:47.104482 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.104486 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.104489 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.104493 | orchestrator |
2025-08-29 17:30:47.104497 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-08-29 17:30:47.104500 | orchestrator | Friday 29 August 2025 17:26:10 +0000 (0:00:00.334) 0:07:03.668 *********
2025-08-29 17:30:47.104504 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.104508 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.104511 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.104515 | orchestrator |
2025-08-29 17:30:47.104519 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-08-29 17:30:47.104523 | orchestrator | Friday 29 August 2025 17:26:10 +0000 (0:00:00.621) 0:07:04.290 *********
2025-08-29 17:30:47.104526 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.104530 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.104534 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.104537 | orchestrator |
2025-08-29 17:30:47.104541 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-08-29 17:30:47.104545 | orchestrator | Friday 29 August 2025 17:26:11 +0000 (0:00:00.536) 0:07:04.827 *********
2025-08-29 17:30:47.104549 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.104552 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.104556 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.104560 | orchestrator |
2025-08-29 17:30:47.104564 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-08-29 17:30:47.104567 | orchestrator | Friday 29 August 2025 17:26:11 +0000 (0:00:00.315) 0:07:05.143 *********
2025-08-29 17:30:47.104571 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-08-29 17:30:47.104576 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 17:30:47.104580 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 17:30:47.104584 | orchestrator |
2025-08-29 17:30:47.104588 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-08-29 17:30:47.104592 | orchestrator | Friday 29 August 2025 17:26:12 +0000 (0:00:00.889) 0:07:06.032 *********
2025-08-29 17:30:47.104595 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:30:47.104599 | orchestrator |
2025-08-29 17:30:47.104603 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-08-29 17:30:47.104606 | orchestrator | Friday 29 August 2025 17:26:13 +0000 (0:00:00.837) 0:07:06.870 *********
2025-08-29 17:30:47.104610 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.104614 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.104617 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.104621 | orchestrator |
2025-08-29 17:30:47.104625 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-08-29 17:30:47.104630 | orchestrator | Friday 29 August 2025 17:26:13 +0000 (0:00:00.349) 0:07:07.219 *********
2025-08-29 17:30:47.104634 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:47.104638 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:47.104642 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:47.104645 | orchestrator |
2025-08-29 17:30:47.104649 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-08-29 17:30:47.104656 | orchestrator | Friday 29 August 2025 17:26:13 +0000 (0:00:00.309) 0:07:07.529 *********
2025-08-29 17:30:47.104660 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.104664 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.104668 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.104671 | orchestrator |
2025-08-29 17:30:47.104675 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-08-29 17:30:47.104679 | orchestrator | Friday 29 August 2025 17:26:14 +0000 (0:00:00.919) 0:07:08.448 *********
2025-08-29 17:30:47.104682 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:47.104686 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:47.104690 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:47.104693 | orchestrator |
2025-08-29 17:30:47.104697 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-08-29 17:30:47.104701 | orchestrator | Friday 29 August 2025 17:26:15 +0000 (0:00:00.354) 0:07:08.802 *********
2025-08-29 17:30:47.104704 | orchestrator | changed: [testbed-node-3] => (item={'name':
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 17:30:47.104708 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 17:30:47.104712 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 17:30:47.104716 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 17:30:47.104720 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 17:30:47.104723 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 17:30:47.104727 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 17:30:47.104731 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 17:30:47.104734 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 17:30:47.104738 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 17:30:47.104742 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 17:30:47.104745 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 17:30:47.104749 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 17:30:47.104753 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 17:30:47.104757 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 17:30:47.104760 | orchestrator | 2025-08-29 17:30:47.104764 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2025-08-29 17:30:47.104768 | orchestrator | Friday 29 August 2025 17:26:17 +0000 (0:00:02.046) 0:07:10.849 ********* 2025-08-29 17:30:47.104771 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.104775 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.104779 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.104783 | orchestrator | 2025-08-29 17:30:47.104786 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-08-29 17:30:47.104790 | orchestrator | Friday 29 August 2025 17:26:17 +0000 (0:00:00.302) 0:07:11.152 ********* 2025-08-29 17:30:47.104794 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:30:47.104798 | orchestrator | 2025-08-29 17:30:47.104801 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-08-29 17:30:47.104805 | orchestrator | Friday 29 August 2025 17:26:18 +0000 (0:00:00.905) 0:07:12.057 ********* 2025-08-29 17:30:47.104809 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 17:30:47.104812 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 17:30:47.104818 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 17:30:47.104825 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-08-29 17:30:47.104828 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-08-29 17:30:47.104832 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-08-29 17:30:47.104836 | orchestrator | 2025-08-29 17:30:47.104839 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-08-29 17:30:47.104843 | orchestrator | Friday 29 August 2025 17:26:19 +0000 (0:00:00.960) 0:07:13.018 ********* 2025-08-29 17:30:47.104847 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:30:47.104851 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 17:30:47.104854 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 17:30:47.104858 | orchestrator | 2025-08-29 17:30:47.104862 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-08-29 17:30:47.104865 | orchestrator | Friday 29 August 2025 17:26:21 +0000 (0:00:01.896) 0:07:14.914 ********* 2025-08-29 17:30:47.104869 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 17:30:47.104873 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 17:30:47.104877 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:30:47.104883 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 17:30:47.104886 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 17:30:47.104890 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:30:47.104894 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 17:30:47.104898 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 17:30:47.104901 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:30:47.104905 | orchestrator | 2025-08-29 17:30:47.104909 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-08-29 17:30:47.104912 | orchestrator | Friday 29 August 2025 17:26:22 +0000 (0:00:01.457) 0:07:16.371 ********* 2025-08-29 17:30:47.104916 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 17:30:47.104920 | orchestrator | 2025-08-29 17:30:47.104924 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-08-29 17:30:47.104927 | orchestrator | Friday 29 August 2025 17:26:24 +0000 (0:00:01.935) 0:07:18.307 ********* 2025-08-29 17:30:47.104931 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:30:47.104935 | orchestrator | 2025-08-29 17:30:47.104938 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-08-29 17:30:47.104942 | orchestrator | Friday 29 August 2025 17:26:25 +0000 (0:00:00.574) 0:07:18.882 ********* 2025-08-29 17:30:47.104946 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7cc16d54-75e9-5c21-b21a-878ce6efb3d6', 'data_vg': 'ceph-7cc16d54-75e9-5c21-b21a-878ce6efb3d6'}) 2025-08-29 17:30:47.104950 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b00dade2-f82b-53af-89a3-8c9250354ec6', 'data_vg': 'ceph-b00dade2-f82b-53af-89a3-8c9250354ec6'}) 2025-08-29 17:30:47.104954 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a4c19265-6381-5c6d-bd77-cfabc91aafa2', 'data_vg': 'ceph-a4c19265-6381-5c6d-bd77-cfabc91aafa2'}) 2025-08-29 17:30:47.104958 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-53dd44b5-7849-5101-9e2a-fd90ac927c8f', 'data_vg': 'ceph-53dd44b5-7849-5101-9e2a-fd90ac927c8f'}) 2025-08-29 17:30:47.104962 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8088253a-7e26-529d-8fdb-0f472c9bb5d3', 'data_vg': 'ceph-8088253a-7e26-529d-8fdb-0f472c9bb5d3'}) 2025-08-29 17:30:47.104965 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591', 'data_vg': 'ceph-b12c38cd-5c6b-5ee1-93c6-dbb5afb60591'}) 2025-08-29 17:30:47.104969 | orchestrator | 2025-08-29 17:30:47.104973 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-08-29 17:30:47.104979 | orchestrator | Friday 29 August 2025 17:27:09 +0000 (0:00:43.996) 0:08:02.878 ********* 2025-08-29 17:30:47.104983 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.104986 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
17:30:47.104990 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.104994 | orchestrator | 2025-08-29 17:30:47.104997 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-08-29 17:30:47.105001 | orchestrator | Friday 29 August 2025 17:27:09 +0000 (0:00:00.602) 0:08:03.481 ********* 2025-08-29 17:30:47.105005 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:30:47.105009 | orchestrator | 2025-08-29 17:30:47.105012 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-08-29 17:30:47.105016 | orchestrator | Friday 29 August 2025 17:27:10 +0000 (0:00:00.577) 0:08:04.058 ********* 2025-08-29 17:30:47.105020 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.105023 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.105027 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.105031 | orchestrator | 2025-08-29 17:30:47.105035 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-08-29 17:30:47.105038 | orchestrator | Friday 29 August 2025 17:27:11 +0000 (0:00:00.641) 0:08:04.700 ********* 2025-08-29 17:30:47.105042 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.105046 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.105049 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.105053 | orchestrator | 2025-08-29 17:30:47.105057 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-08-29 17:30:47.105060 | orchestrator | Friday 29 August 2025 17:27:13 +0000 (0:00:02.802) 0:08:07.502 ********* 2025-08-29 17:30:47.105064 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:30:47.105068 | orchestrator | 2025-08-29 17:30:47.105073 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-08-29 17:30:47.105077 | orchestrator | Friday 29 August 2025 17:27:14 +0000 (0:00:00.551) 0:08:08.054 ********* 2025-08-29 17:30:47.105081 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:30:47.105085 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:30:47.105088 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:30:47.105092 | orchestrator | 2025-08-29 17:30:47.105096 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-08-29 17:30:47.105100 | orchestrator | Friday 29 August 2025 17:27:15 +0000 (0:00:01.138) 0:08:09.193 ********* 2025-08-29 17:30:47.105103 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:30:47.105107 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:30:47.105111 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:30:47.105114 | orchestrator | 2025-08-29 17:30:47.105118 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-08-29 17:30:47.105122 | orchestrator | Friday 29 August 2025 17:27:16 +0000 (0:00:01.424) 0:08:10.618 ********* 2025-08-29 17:30:47.105125 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:30:47.105129 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:30:47.105133 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:30:47.105137 | orchestrator | 2025-08-29 17:30:47.105142 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-08-29 17:30:47.105146 | orchestrator | Friday 29 August 2025 17:27:18 +0000 (0:00:01.687) 0:08:12.306 ********* 2025-08-29 17:30:47.105149 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105153 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.105157 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.105161 | orchestrator | 2025-08-29 17:30:47.105164 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-08-29 17:30:47.105168 | orchestrator | Friday 29 August 2025 17:27:19 +0000 (0:00:00.416) 0:08:12.722 ********* 2025-08-29 17:30:47.105174 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105178 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.105181 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.105185 | orchestrator | 2025-08-29 17:30:47.105189 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-08-29 17:30:47.105193 | orchestrator | Friday 29 August 2025 17:27:19 +0000 (0:00:00.339) 0:08:13.062 ********* 2025-08-29 17:30:47.105196 | orchestrator | ok: [testbed-node-3] => (item=2) 2025-08-29 17:30:47.105200 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-08-29 17:30:47.105204 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-08-29 17:30:47.105207 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-08-29 17:30:47.105211 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 17:30:47.105215 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-08-29 17:30:47.105218 | orchestrator | 2025-08-29 17:30:47.105222 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-08-29 17:30:47.105226 | orchestrator | Friday 29 August 2025 17:27:20 +0000 (0:00:01.473) 0:08:14.535 ********* 2025-08-29 17:30:47.105230 | orchestrator | changed: [testbed-node-3] => (item=2) 2025-08-29 17:30:47.105233 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-08-29 17:30:47.105237 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-08-29 17:30:47.105241 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-08-29 17:30:47.105244 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-08-29 17:30:47.105248 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-08-29 17:30:47.105252 | orchestrator | 2025-08-29 17:30:47.105256 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-08-29 17:30:47.105259 | orchestrator | Friday 29 August 2025 17:27:23 +0000 (0:00:02.162) 0:08:16.697 ********* 2025-08-29 17:30:47.105263 | orchestrator | changed: [testbed-node-3] => (item=2) 2025-08-29 17:30:47.105267 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-08-29 17:30:47.105270 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-08-29 17:30:47.105274 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-08-29 17:30:47.105278 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-08-29 17:30:47.105282 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-08-29 17:30:47.105285 | orchestrator | 2025-08-29 17:30:47.105289 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-08-29 17:30:47.105293 | orchestrator | Friday 29 August 2025 17:27:27 +0000 (0:00:04.370) 0:08:21.068 ********* 2025-08-29 17:30:47.105296 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105300 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.105304 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-08-29 17:30:47.105307 | orchestrator | 2025-08-29 17:30:47.105311 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-08-29 17:30:47.105315 | orchestrator | Friday 29 August 2025 17:27:30 +0000 (0:00:02.942) 0:08:24.010 ********* 2025-08-29 17:30:47.105319 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105322 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.105326 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-08-29 17:30:47.105330 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-08-29 17:30:47.105334 | orchestrator | 2025-08-29 17:30:47.105337 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-08-29 17:30:47.105341 | orchestrator | Friday 29 August 2025 17:27:43 +0000 (0:00:13.121) 0:08:37.132 ********* 2025-08-29 17:30:47.105352 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105355 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.105359 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.105363 | orchestrator | 2025-08-29 17:30:47.105367 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 17:30:47.105370 | orchestrator | Friday 29 August 2025 17:27:44 +0000 (0:00:00.887) 0:08:38.020 ********* 2025-08-29 17:30:47.105377 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105380 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.105384 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.105389 | orchestrator | 2025-08-29 17:30:47.105396 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-08-29 17:30:47.105402 | orchestrator | Friday 29 August 2025 17:27:45 +0000 (0:00:00.633) 0:08:38.653 ********* 2025-08-29 17:30:47.105406 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:30:47.105409 | orchestrator | 2025-08-29 17:30:47.105413 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-08-29 17:30:47.105417 | orchestrator | Friday 29 August 2025 17:27:45 +0000 (0:00:00.613) 0:08:39.267 ********* 2025-08-29 17:30:47.105421 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 17:30:47.105424 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-08-29 17:30:47.105428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 17:30:47.105432 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105435 | orchestrator | 2025-08-29 17:30:47.105439 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-08-29 17:30:47.105443 | orchestrator | Friday 29 August 2025 17:27:46 +0000 (0:00:00.440) 0:08:39.707 ********* 2025-08-29 17:30:47.105446 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105450 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.105456 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.105460 | orchestrator | 2025-08-29 17:30:47.105463 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-08-29 17:30:47.105467 | orchestrator | Friday 29 August 2025 17:27:46 +0000 (0:00:00.305) 0:08:40.013 ********* 2025-08-29 17:30:47.105471 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105474 | orchestrator | 2025-08-29 17:30:47.105478 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-08-29 17:30:47.105482 | orchestrator | Friday 29 August 2025 17:27:46 +0000 (0:00:00.242) 0:08:40.256 ********* 2025-08-29 17:30:47.105485 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105489 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.105493 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.105496 | orchestrator | 2025-08-29 17:30:47.105500 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-08-29 17:30:47.105504 | orchestrator | Friday 29 August 2025 17:27:47 +0000 (0:00:00.582) 0:08:40.838 ********* 2025-08-29 17:30:47.105507 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105511 | orchestrator | 2025-08-29 17:30:47.105515 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-08-29 17:30:47.105519 | orchestrator | Friday 29 August 2025 17:27:47 +0000 (0:00:00.223) 0:08:41.061 ********* 2025-08-29 17:30:47.105522 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105526 | orchestrator | 2025-08-29 17:30:47.105530 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-08-29 17:30:47.105533 | orchestrator | Friday 29 August 2025 17:27:47 +0000 (0:00:00.246) 0:08:41.307 ********* 2025-08-29 17:30:47.105537 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105541 | orchestrator | 2025-08-29 17:30:47.105544 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-08-29 17:30:47.105548 | orchestrator | Friday 29 August 2025 17:27:47 +0000 (0:00:00.154) 0:08:41.462 ********* 2025-08-29 17:30:47.105552 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105555 | orchestrator | 2025-08-29 17:30:47.105559 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-08-29 17:30:47.105563 | orchestrator | Friday 29 August 2025 17:27:48 +0000 (0:00:00.237) 0:08:41.699 ********* 2025-08-29 17:30:47.105567 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105570 | orchestrator | 2025-08-29 17:30:47.105574 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-08-29 17:30:47.105580 | orchestrator | Friday 29 August 2025 17:27:48 +0000 (0:00:00.245) 0:08:41.945 ********* 2025-08-29 17:30:47.105584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 17:30:47.105587 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 17:30:47.105591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 17:30:47.105595 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
17:30:47.105599 | orchestrator | 2025-08-29 17:30:47.105602 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-08-29 17:30:47.105606 | orchestrator | Friday 29 August 2025 17:27:48 +0000 (0:00:00.426) 0:08:42.371 ********* 2025-08-29 17:30:47.105610 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105614 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.105617 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.105621 | orchestrator | 2025-08-29 17:30:47.105625 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-08-29 17:30:47.105629 | orchestrator | Friday 29 August 2025 17:27:49 +0000 (0:00:00.342) 0:08:42.713 ********* 2025-08-29 17:30:47.105632 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105636 | orchestrator | 2025-08-29 17:30:47.105640 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-08-29 17:30:47.105644 | orchestrator | Friday 29 August 2025 17:27:49 +0000 (0:00:00.873) 0:08:43.587 ********* 2025-08-29 17:30:47.105647 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105651 | orchestrator | 2025-08-29 17:30:47.105655 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-08-29 17:30:47.105659 | orchestrator | 2025-08-29 17:30:47.105662 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 17:30:47.105666 | orchestrator | Friday 29 August 2025 17:27:50 +0000 (0:00:00.718) 0:08:44.306 ********* 2025-08-29 17:30:47.105670 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:30:47.105673 | orchestrator | 2025-08-29 17:30:47.105677 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-08-29 17:30:47.105681 | orchestrator | Friday 29 August 2025 17:27:52 +0000 (0:00:01.379) 0:08:45.686 ********* 2025-08-29 17:30:47.105686 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-1, testbed-node-2, testbed-node-0 2025-08-29 17:30:47.105690 | orchestrator | 2025-08-29 17:30:47.105694 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 17:30:47.105698 | orchestrator | Friday 29 August 2025 17:27:53 +0000 (0:00:01.381) 0:08:47.068 ********* 2025-08-29 17:30:47.105701 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.105705 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.105709 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.105713 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:47.105716 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:47.105720 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:47.105724 | orchestrator | 2025-08-29 17:30:47.105727 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 17:30:47.105731 | orchestrator | Friday 29 August 2025 17:27:54 +0000 (0:00:01.348) 0:08:48.416 ********* 2025-08-29 17:30:47.105735 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:47.105739 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.105742 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.105746 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:47.105750 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.105753 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:47.105757 | orchestrator | 2025-08-29 17:30:47.105764 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 17:30:47.105770 | orchestrator | Friday 29 
August 2025 17:27:55 +0000 (0:00:00.772) 0:08:49.188 *********
2025-08-29 17:30:47.105774 | orchestrator | skipping: [testbed-node-0]
ok: [testbed-node-3]
skipping: [testbed-node-1]
ok: [testbed-node-4]
skipping: [testbed-node-2]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Friday 29 August 2025 17:27:56 +0000 (0:00:01.102) 0:08:50.290 *********
skipping: [testbed-node-0]
ok: [testbed-node-3]
skipping: [testbed-node-1]
ok: [testbed-node-4]
skipping: [testbed-node-2]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Friday 29 August 2025 17:27:57 +0000 (0:00:00.733) 0:08:51.024 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Friday 29 August 2025 17:27:58 +0000 (0:00:01.229) 0:08:52.254 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Friday 29 August 2025 17:27:59 +0000 (0:00:00.634) 0:08:52.888 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Friday 29 August 2025 17:27:59 +0000 (0:00:00.661) 0:08:53.549 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Friday 29 August 2025 17:28:01 +0000 (0:00:01.580) 0:08:55.130 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Friday 29 August 2025 17:28:02 +0000 (0:00:01.147) 0:08:56.278 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Friday 29 August 2025 17:28:03 +0000 (0:00:01.072) 0:08:57.350 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Friday 29 August 2025 17:28:04 +0000 (0:00:00.682) 0:08:58.032 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Friday 29 August 2025 17:28:05 +0000 (0:00:01.034) 0:08:59.067 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Friday 29 August 2025 17:28:06 +0000 (0:00:00.658) 0:08:59.726 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Friday 29 August 2025 17:28:07 +0000 (0:00:01.009) 0:09:00.735 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Friday 29 August 2025 17:28:07 +0000 (0:00:00.652) 0:09:01.387 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Friday 29 August 2025 17:28:08 +0000 (0:00:00.874) 0:09:02.262 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Friday 29 August 2025 17:28:09 +0000 (0:00:00.645) 0:09:02.907 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Friday 29 August 2025 17:28:10 +0000 (0:00:00.998) 0:09:03.906 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-crash : Create client.crash keyring] ********************************
Friday 29 August 2025 17:28:11 +0000 (0:00:01.380) 0:09:05.287 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Get keys from monitors] *************************************
Friday 29 August 2025 17:28:16 +0000 (0:00:04.665) 0:09:09.952 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
Friday 29 August 2025 17:28:18 +0000 (0:00:02.127) 0:09:12.080 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
ok: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
Friday 29 August 2025 17:28:19 +0000 (0:00:01.561) 0:09:13.641 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Include_tasks systemd.yml] **********************************
Friday 29 August 2025 17:28:21 +0000 (0:00:01.543) 0:09:15.185 *********
included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-2, testbed-node-1

TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
Friday 29 August 2025 17:28:23 +0000 (0:00:01.616) 0:09:16.801 *********
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Start the ceph-crash service] *******************************
Friday 29 August 2025 17:28:24 +0000 (0:00:01.731) 0:09:18.532 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
Friday 29 August 2025 17:28:28 +0000 (0:00:03.655) 0:09:22.188 *********
included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
Friday 29 August 2025 17:28:29 +0000 (0:00:01.342) 0:09:23.531 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
Friday 29 August 2025 17:28:30 +0000 (0:00:00.781) 0:09:24.313 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
Friday 29 August 2025 17:28:33 +0000 (0:00:02.815) 0:09:27.129 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mds] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Friday 29 August 2025 17:28:34 +0000 (0:00:00.997) 0:09:28.126 *********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Friday 29 August 2025 17:28:35 +0000 (0:00:00.876) 0:09:29.002 *********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Friday 29 August 2025 17:28:35 +0000 (0:00:00.603) 0:09:29.606 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Friday 29 August 2025 17:28:36 +0000 (0:00:00.588) 0:09:30.195 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Friday 29 August 2025 17:28:37 +0000 (0:00:00.772) 0:09:30.967 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Friday 29 August 2025 17:28:38 +0000 (0:00:00.858) 0:09:31.826 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Friday 29 August 2025 17:28:38 +0000 (0:00:00.782) 0:09:32.608 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Friday 29 August 2025 17:28:39 +0000 (0:00:00.671) 0:09:33.280 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Friday 29 August 2025 17:28:39 +0000 (0:00:00.352) 0:09:33.632 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Friday 29 August 2025 17:28:40 +0000 (0:00:00.440) 0:09:34.073 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Friday 29 August 2025 17:28:41 +0000 (0:00:00.759) 0:09:34.832 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Friday 29 August 2025 17:28:42 +0000 (0:00:01.099) 0:09:35.933 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Friday 29 August 2025 17:28:42 +0000 (0:00:00.374) 0:09:36.307 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Friday 29 August 2025 17:28:43 +0000 (0:00:00.345) 0:09:36.652 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Friday 29 August 2025 17:28:43 +0000 (0:00:00.438) 0:09:37.090 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Friday 29 August 2025 17:28:44 +0000 (0:00:00.749) 0:09:37.839 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Friday 29 August 2025 17:28:44 +0000 (0:00:00.379) 0:09:38.219 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Friday 29 August 2025 17:28:44 +0000 (0:00:00.382) 0:09:38.602 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Friday 29 August 2025 17:28:45 +0000 (0:00:00.392) 0:09:38.994 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Friday 29 August 2025 17:28:46 +0000 (0:00:00.688) 0:09:39.682 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Friday 29 August 2025 17:28:46 +0000 (0:00:00.406) 0:09:40.088 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
Friday 29 August 2025 17:28:47 +0000 (0:00:00.709) 0:09:40.797 *********
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3

TASK [ceph-facts : Get current default crush rule details] *********************
Friday 29 August 2025 17:28:47 +0000 (0:00:00.796) 0:09:41.594 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Get current default crush rule name] ************************
Friday 29 August 2025 17:28:50 +0000 (0:00:02.251) 0:09:43.846 *********
skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
skipping: [testbed-node-3]

TASK [ceph-mds : Create filesystem pools] **************************************
Friday 29 August 2025 17:28:50 +0000 (0:00:00.271) 0:09:44.117 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})

TASK [ceph-mds : Create ceph filesystem] ***************************************
Friday 29 August 2025 17:28:58 +0000 (0:00:08.009) 0:09:52.127 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mds : Include common.yml] *******************************************
Friday 29 August 2025 17:29:02 +0000 (0:00:04.084) 0:09:56.211 *********
included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
Friday 29 August 2025 17:29:03 +0000 (0:00:00.879) 0:09:57.091 *********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)

TASK [ceph-mds : Get keys from monitors] ***************************************
Friday 29 August 2025 17:29:04 +0000 (0:00:01.140) 0:09:58.232 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
Friday 29 August 2025 17:29:06 +0000 (0:00:02.377) 0:10:00.609 *********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-mds : Create mds keyring] *******************************************
Friday 29 August 2025 17:29:08 +0000 (0:00:01.302) 0:10:01.911 *********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [ceph-mds : Non_containerized.yml] ****************************************
Friday 29 August 2025 17:29:11 +0000 (0:00:03.216) 0:10:05.128 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-mds : Containerized.yml] ********************************************
Friday 29 August 2025 17:29:11 +0000 (0:00:00.390) 0:10:05.518 *********
included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Include_tasks systemd.yml] ************************************
Friday 29 August 2025 17:29:12 +0000 (0:00:00.624) 0:10:06.142 *********
included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Generate systemd unit file] ***********************************
Friday 29 August 2025 17:29:13 +0000 (0:00:00.966) 0:10:07.109 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
Friday 29 August 2025 17:29:15 +0000 (0:00:01.554) 0:10:08.664 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Enable ceph-mds.target] ***************************************
Friday 29 August 2025 17:29:16 +0000 (0:00:01.384) 0:10:10.049 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Systemd start mds container] **********************************
Friday 29 August 2025 17:29:18 +0000 (0:00:01.766) 0:10:11.815 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Friday 29 August 2025 17:29:20 +0000 (0:00:02.433) 0:10:14.249 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Friday 29 August 2025 17:29:22 +0000 (0:00:01.634) 0:10:15.883 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Friday 29 August 2025 17:29:23 +0000 (0:00:01.134) 0:10:17.017 *********
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Friday 29 August 2025 17:29:24 +0000 (0:00:00.679) 0:10:17.697 *********
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Friday 29 August 2025 17:29:24 +0000 (0:00:00.404) 0:10:18.101 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Friday 29 August 2025 17:29:26 +0000 (0:00:01.687) 0:10:19.789 *********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Friday 29 August 2025 17:29:26 +0000 (0:00:00.810) 0:10:20.599 *********
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
17:30:47.107711 | orchestrator | 2025-08-29 17:30:47.107715 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-08-29 17:30:47.107718 | orchestrator | 2025-08-29 17:30:47.107722 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 17:30:47.107726 | orchestrator | Friday 29 August 2025 17:29:28 +0000 (0:00:01.072) 0:10:21.671 ********* 2025-08-29 17:30:47.107730 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:30:47.107733 | orchestrator | 2025-08-29 17:30:47.107737 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 17:30:47.107741 | orchestrator | Friday 29 August 2025 17:29:29 +0000 (0:00:01.037) 0:10:22.709 ********* 2025-08-29 17:30:47.107746 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:30:47.107750 | orchestrator | 2025-08-29 17:30:47.107754 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 17:30:47.107758 | orchestrator | Friday 29 August 2025 17:29:29 +0000 (0:00:00.573) 0:10:23.282 ********* 2025-08-29 17:30:47.107762 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.107765 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.107769 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.107773 | orchestrator | 2025-08-29 17:30:47.107776 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 17:30:47.107780 | orchestrator | Friday 29 August 2025 17:29:30 +0000 (0:00:00.715) 0:10:23.998 ********* 2025-08-29 17:30:47.107784 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.107787 | orchestrator | ok: [testbed-node-4] 2025-08-29 
17:30:47.107791 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.107795 | orchestrator | 2025-08-29 17:30:47.107798 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 17:30:47.107802 | orchestrator | Friday 29 August 2025 17:29:31 +0000 (0:00:01.009) 0:10:25.007 ********* 2025-08-29 17:30:47.107806 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.107810 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.107813 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.107817 | orchestrator | 2025-08-29 17:30:47.107821 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 17:30:47.107824 | orchestrator | Friday 29 August 2025 17:29:32 +0000 (0:00:01.003) 0:10:26.011 ********* 2025-08-29 17:30:47.107828 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.107832 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.107835 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.107839 | orchestrator | 2025-08-29 17:30:47.107843 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 17:30:47.107849 | orchestrator | Friday 29 August 2025 17:29:33 +0000 (0:00:01.032) 0:10:27.043 ********* 2025-08-29 17:30:47.107853 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.107856 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.107860 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.107864 | orchestrator | 2025-08-29 17:30:47.107868 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 17:30:47.107871 | orchestrator | Friday 29 August 2025 17:29:34 +0000 (0:00:00.732) 0:10:27.775 ********* 2025-08-29 17:30:47.107875 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.107879 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.107882 | orchestrator | skipping: 
[testbed-node-5] 2025-08-29 17:30:47.107886 | orchestrator | 2025-08-29 17:30:47.107890 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 17:30:47.107893 | orchestrator | Friday 29 August 2025 17:29:34 +0000 (0:00:00.372) 0:10:28.147 ********* 2025-08-29 17:30:47.107897 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.107901 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.107904 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.107908 | orchestrator | 2025-08-29 17:30:47.107912 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 17:30:47.107916 | orchestrator | Friday 29 August 2025 17:29:34 +0000 (0:00:00.416) 0:10:28.564 ********* 2025-08-29 17:30:47.107919 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.107923 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.107927 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.107930 | orchestrator | 2025-08-29 17:30:47.107934 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 17:30:47.107938 | orchestrator | Friday 29 August 2025 17:29:35 +0000 (0:00:00.747) 0:10:29.311 ********* 2025-08-29 17:30:47.107941 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.107945 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.107949 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.107952 | orchestrator | 2025-08-29 17:30:47.107956 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 17:30:47.107960 | orchestrator | Friday 29 August 2025 17:29:36 +0000 (0:00:01.134) 0:10:30.445 ********* 2025-08-29 17:30:47.107963 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.107967 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.107971 | orchestrator | skipping: [testbed-node-5] 2025-08-29 
17:30:47.107974 | orchestrator | 2025-08-29 17:30:47.107978 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 17:30:47.107982 | orchestrator | Friday 29 August 2025 17:29:37 +0000 (0:00:00.366) 0:10:30.812 ********* 2025-08-29 17:30:47.107986 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.107989 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.107993 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.107996 | orchestrator | 2025-08-29 17:30:47.108000 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 17:30:47.108004 | orchestrator | Friday 29 August 2025 17:29:37 +0000 (0:00:00.337) 0:10:31.150 ********* 2025-08-29 17:30:47.108008 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.108011 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.108015 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.108019 | orchestrator | 2025-08-29 17:30:47.108028 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 17:30:47.108032 | orchestrator | Friday 29 August 2025 17:29:37 +0000 (0:00:00.387) 0:10:31.538 ********* 2025-08-29 17:30:47.108036 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.108040 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.108044 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.108047 | orchestrator | 2025-08-29 17:30:47.108051 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 17:30:47.108055 | orchestrator | Friday 29 August 2025 17:29:38 +0000 (0:00:00.713) 0:10:32.251 ********* 2025-08-29 17:30:47.108090 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.108094 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.108098 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.108101 | orchestrator | 2025-08-29 
17:30:47.108105 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 17:30:47.108109 | orchestrator | Friday 29 August 2025 17:29:39 +0000 (0:00:00.436) 0:10:32.687 ********* 2025-08-29 17:30:47.108112 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.108116 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.108120 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.108124 | orchestrator | 2025-08-29 17:30:47.108129 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 17:30:47.108133 | orchestrator | Friday 29 August 2025 17:29:39 +0000 (0:00:00.342) 0:10:33.029 ********* 2025-08-29 17:30:47.108137 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.108140 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.108144 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.108148 | orchestrator | 2025-08-29 17:30:47.108152 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 17:30:47.108155 | orchestrator | Friday 29 August 2025 17:29:39 +0000 (0:00:00.334) 0:10:33.364 ********* 2025-08-29 17:30:47.108159 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.108163 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.108167 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.108170 | orchestrator | 2025-08-29 17:30:47.108174 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 17:30:47.108178 | orchestrator | Friday 29 August 2025 17:29:40 +0000 (0:00:00.631) 0:10:33.996 ********* 2025-08-29 17:30:47.108181 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.108185 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.108189 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.108193 | orchestrator | 2025-08-29 17:30:47.108196 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 17:30:47.108200 | orchestrator | Friday 29 August 2025 17:29:40 +0000 (0:00:00.401) 0:10:34.397 ********* 2025-08-29 17:30:47.108204 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.108207 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.108211 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.108215 | orchestrator | 2025-08-29 17:30:47.108219 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-08-29 17:30:47.108222 | orchestrator | Friday 29 August 2025 17:29:41 +0000 (0:00:00.629) 0:10:35.026 ********* 2025-08-29 17:30:47.108226 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:30:47.108230 | orchestrator | 2025-08-29 17:30:47.108234 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-08-29 17:30:47.108237 | orchestrator | Friday 29 August 2025 17:29:42 +0000 (0:00:00.931) 0:10:35.958 ********* 2025-08-29 17:30:47.108241 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:30:47.108245 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 17:30:47.108249 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 17:30:47.108252 | orchestrator | 2025-08-29 17:30:47.108256 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-08-29 17:30:47.108260 | orchestrator | Friday 29 August 2025 17:29:44 +0000 (0:00:02.340) 0:10:38.298 ********* 2025-08-29 17:30:47.108263 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 17:30:47.108267 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 17:30:47.108271 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:30:47.108275 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2025-08-29 17:30:47.108278 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 17:30:47.108282 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:30:47.108289 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 17:30:47.108293 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 17:30:47.108296 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:30:47.108300 | orchestrator | 2025-08-29 17:30:47.108304 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-08-29 17:30:47.108308 | orchestrator | Friday 29 August 2025 17:29:45 +0000 (0:00:01.273) 0:10:39.572 ********* 2025-08-29 17:30:47.108311 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.108315 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.108319 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.108322 | orchestrator | 2025-08-29 17:30:47.108326 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-08-29 17:30:47.108330 | orchestrator | Friday 29 August 2025 17:29:46 +0000 (0:00:00.410) 0:10:39.983 ********* 2025-08-29 17:30:47.108334 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:30:47.108337 | orchestrator | 2025-08-29 17:30:47.108341 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-08-29 17:30:47.108353 | orchestrator | Friday 29 August 2025 17:29:47 +0000 (0:00:01.219) 0:10:41.202 ********* 2025-08-29 17:30:47.108357 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 17:30:47.108363 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 17:30:47.108367 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 17:30:47.108371 | orchestrator | 2025-08-29 17:30:47.108375 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-08-29 17:30:47.108378 | orchestrator | Friday 29 August 2025 17:29:48 +0000 (0:00:01.185) 0:10:42.387 ********* 2025-08-29 17:30:47.108382 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:30:47.108386 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 17:30:47.108390 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:30:47.108394 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 17:30:47.108399 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:30:47.108403 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 17:30:47.108407 | orchestrator | 2025-08-29 17:30:47.108411 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-08-29 17:30:47.108414 | orchestrator | Friday 29 August 2025 17:29:53 +0000 (0:00:04.794) 0:10:47.182 ********* 2025-08-29 17:30:47.108418 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:30:47.108422 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 17:30:47.108426 | orchestrator | 
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:30:47.108429 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 17:30:47.108433 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:30:47.108437 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 17:30:47.108441 | orchestrator | 2025-08-29 17:30:47.108444 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-08-29 17:30:47.108448 | orchestrator | Friday 29 August 2025 17:29:56 +0000 (0:00:02.871) 0:10:50.054 ********* 2025-08-29 17:30:47.108454 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 17:30:47.108458 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:30:47.108462 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 17:30:47.108465 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:30:47.108469 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 17:30:47.108473 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:30:47.108476 | orchestrator | 2025-08-29 17:30:47.108480 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-08-29 17:30:47.108484 | orchestrator | Friday 29 August 2025 17:29:57 +0000 (0:00:01.516) 0:10:51.571 ********* 2025-08-29 17:30:47.108488 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-08-29 17:30:47.108492 | orchestrator | 2025-08-29 17:30:47.108495 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-08-29 17:30:47.108499 | orchestrator | Friday 29 August 2025 17:29:58 +0000 (0:00:00.254) 0:10:51.825 ********* 2025-08-29 17:30:47.108503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-08-29 17:30:47.108507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 17:30:47.108511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 17:30:47.108514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 17:30:47.108518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 17:30:47.108522 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.108526 | orchestrator | 2025-08-29 17:30:47.108529 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-08-29 17:30:47.108533 | orchestrator | Friday 29 August 2025 17:29:58 +0000 (0:00:00.731) 0:10:52.556 ********* 2025-08-29 17:30:47.108537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 17:30:47.108541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 17:30:47.108545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 17:30:47.108548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 17:30:47.108552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 17:30:47.108557 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
17:30:47.108561 | orchestrator | 2025-08-29 17:30:47.108565 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-08-29 17:30:47.108569 | orchestrator | Friday 29 August 2025 17:29:59 +0000 (0:00:00.748) 0:10:53.305 ********* 2025-08-29 17:30:47.108573 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 17:30:47.108576 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 17:30:47.108580 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 17:30:47.108586 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 17:30:47.108593 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 17:30:47.108596 | orchestrator | 2025-08-29 17:30:47.108600 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-08-29 17:30:47.108604 | orchestrator | Friday 29 August 2025 17:30:32 +0000 (0:00:32.642) 0:11:25.947 ********* 2025-08-29 17:30:47.108608 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.108611 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.108615 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.108619 | orchestrator | 2025-08-29 17:30:47.108623 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-08-29 17:30:47.108626 | orchestrator | 
Friday 29 August 2025 17:30:32 +0000 (0:00:00.321) 0:11:26.269 ********* 2025-08-29 17:30:47.108630 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.108634 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.108637 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.108641 | orchestrator | 2025-08-29 17:30:47.108645 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-08-29 17:30:47.108649 | orchestrator | Friday 29 August 2025 17:30:33 +0000 (0:00:00.505) 0:11:26.775 ********* 2025-08-29 17:30:47.108652 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:30:47.108656 | orchestrator | 2025-08-29 17:30:47.108660 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-08-29 17:30:47.108664 | orchestrator | Friday 29 August 2025 17:30:33 +0000 (0:00:00.571) 0:11:27.346 ********* 2025-08-29 17:30:47.108668 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:30:47.108671 | orchestrator | 2025-08-29 17:30:47.108675 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-08-29 17:30:47.108679 | orchestrator | Friday 29 August 2025 17:30:34 +0000 (0:00:00.857) 0:11:28.204 ********* 2025-08-29 17:30:47.108682 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:30:47.108686 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:30:47.108690 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:30:47.108694 | orchestrator | 2025-08-29 17:30:47.108697 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-08-29 17:30:47.108701 | orchestrator | Friday 29 August 2025 17:30:35 +0000 (0:00:01.429) 0:11:29.634 ********* 2025-08-29 17:30:47.108705 | orchestrator | changed: 
[testbed-node-3] 2025-08-29 17:30:47.108709 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:30:47.108712 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:30:47.108716 | orchestrator | 2025-08-29 17:30:47.108720 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-08-29 17:30:47.108724 | orchestrator | Friday 29 August 2025 17:30:37 +0000 (0:00:01.247) 0:11:30.882 ********* 2025-08-29 17:30:47.108727 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:30:47.108731 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:30:47.108735 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:30:47.108739 | orchestrator | 2025-08-29 17:30:47.108742 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-08-29 17:30:47.108746 | orchestrator | Friday 29 August 2025 17:30:39 +0000 (0:00:01.790) 0:11:32.672 ********* 2025-08-29 17:30:47.108750 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 17:30:47.108754 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 17:30:47.108757 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 17:30:47.108763 | orchestrator | 2025-08-29 17:30:47.108767 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 17:30:47.108771 | orchestrator | Friday 29 August 2025 17:30:41 +0000 (0:00:02.790) 0:11:35.462 ********* 2025-08-29 17:30:47.108775 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.108778 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.108782 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.108786 | orchestrator 
| 2025-08-29 17:30:47.108790 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-08-29 17:30:47.108793 | orchestrator | Friday 29 August 2025 17:30:42 +0000 (0:00:00.474) 0:11:35.937 ********* 2025-08-29 17:30:47.108799 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:30:47.108803 | orchestrator | 2025-08-29 17:30:47.108806 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-08-29 17:30:47.108810 | orchestrator | Friday 29 August 2025 17:30:43 +0000 (0:00:00.929) 0:11:36.867 ********* 2025-08-29 17:30:47.108814 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.108818 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.108821 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.108825 | orchestrator | 2025-08-29 17:30:47.108829 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-08-29 17:30:47.108833 | orchestrator | Friday 29 August 2025 17:30:43 +0000 (0:00:00.358) 0:11:37.225 ********* 2025-08-29 17:30:47.108837 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:47.108840 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:47.108844 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:47.108848 | orchestrator | 2025-08-29 17:30:47.108851 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-08-29 17:30:47.108855 | orchestrator | Friday 29 August 2025 17:30:43 +0000 (0:00:00.304) 0:11:37.529 ********* 2025-08-29 17:30:47.108859 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 17:30:47.108864 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 17:30:47.108868 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 17:30:47.108872 | orchestrator 
| skipping: [testbed-node-3] 2025-08-29 17:30:47.108876 | orchestrator | 2025-08-29 17:30:47.108879 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-08-29 17:30:47.108883 | orchestrator | Friday 29 August 2025 17:30:44 +0000 (0:00:00.906) 0:11:38.435 ********* 2025-08-29 17:30:47.108887 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:47.108891 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:47.108894 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:47.108898 | orchestrator | 2025-08-29 17:30:47.108902 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:30:47.108906 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-08-29 17:30:47.108910 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-08-29 17:30:47.108913 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-08-29 17:30:47.108917 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-08-29 17:30:47.108921 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-08-29 17:30:47.108925 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-08-29 17:30:47.108929 | orchestrator | 2025-08-29 17:30:47.108935 | orchestrator | 2025-08-29 17:30:47.108939 | orchestrator | 2025-08-29 17:30:47.108942 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:30:47.108946 | orchestrator | Friday 29 August 2025 17:30:45 +0000 (0:00:00.215) 0:11:38.651 ********* 2025-08-29 17:30:47.108950 | orchestrator | =============================================================================== 
2025-08-29 17:30:47.108954 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 73.59s
2025-08-29 17:30:47.108957 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 44.00s
2025-08-29 17:30:47.108961 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.64s
2025-08-29 17:30:47.108965 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.76s
2025-08-29 17:30:47.108969 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.71s
2025-08-29 17:30:47.108972 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.12s
2025-08-29 17:30:47.108976 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.50s
2025-08-29 17:30:47.108980 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.87s
2025-08-29 17:30:47.108984 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.01s
2025-08-29 17:30:47.108987 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.44s
2025-08-29 17:30:47.108991 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.24s
2025-08-29 17:30:47.108995 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 5.82s
2025-08-29 17:30:47.108998 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.00s
2025-08-29 17:30:47.109002 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.79s
2025-08-29 17:30:47.109006 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.67s
2025-08-29 17:30:47.109009 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.37s
2025-08-29 17:30:47.109013 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.08s
2025-08-29 17:30:47.109017 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.66s
2025-08-29 17:30:47.109021 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.39s
2025-08-29 17:30:47.109024 | orchestrator | ceph-mds : Create mds keyring ------------------------------------------- 3.22s
2025-08-29 17:30:47.109030 | orchestrator | 2025-08-29 17:30:47 | INFO  | Task 37e6ac28-8ad3-44b4-b556-864cdd988461 is in state STARTED
2025-08-29 17:30:47.109033 | orchestrator | 2025-08-29 17:30:47 | INFO  | Task 2acb7d51-bda0-44fe-99a5-0782ab9bb4af is in state STARTED
2025-08-29 17:30:47.109037 | orchestrator | 2025-08-29 17:30:47 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:30:50.144543 | orchestrator | 2025-08-29 17:30:50 | INFO  | Task dbf9536b-4a8e-45e2-82bf-b4dd083cefa1 is in state STARTED
2025-08-29 17:30:50.145695 | orchestrator | 2025-08-29 17:30:50 | INFO  | Task 37e6ac28-8ad3-44b4-b556-864cdd988461 is in state STARTED
2025-08-29 17:30:50.147778 | orchestrator | 2025-08-29 17:30:50 | INFO  | Task 2acb7d51-bda0-44fe-99a5-0782ab9bb4af is in state STARTED
2025-08-29 17:30:50.147813 | orchestrator | 2025-08-29 17:30:50 | INFO  | Wait 1 second(s) until the next check
[… identical polling rounds (Tasks dbf9536b…, 37e6ac28… and 2acb7d51… in state STARTED, then "Wait 1 second(s) until the next check") repeated every ~3 s from 17:30:53 through 17:32:33 …]
2025-08-29 17:32:36.922110 | orchestrator | 2025-08-29 17:32:36 | INFO  | Task dbf9536b-4a8e-45e2-82bf-b4dd083cefa1 is in state STARTED
2025-08-29 17:32:36.926285 | orchestrator |
2025-08-29 17:32:36.926335 | orchestrator |
2025-08-29 17:32:36.926348 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 17:32:36.926416 | orchestrator |
2025-08-29 17:32:36.926428 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 17:32:36.926440 | orchestrator | Friday 29 August 2025 17:28:54 +0000 (0:00:00.314) 0:00:00.314 *********
2025-08-29 17:32:36.926451 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:32:36.926464 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:32:36.926475 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:32:36.926486 | 
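The repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` records are produced by a client polling the task API until every task leaves the STARTED state. A minimal sketch of such a wait loop (the `get_state` callback and the fake backend below are illustrative assumptions, not the real osism client API):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600.0):
    """Poll each task until it leaves STARTED/PENDING; return the final states."""
    deadline = time.monotonic() + timeout
    pending, final = set(task_ids), {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state not in ("STARTED", "PENDING"):
                pending.discard(task_id)
                final[task_id] = state
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still running: {sorted(pending)}")
            print(f"INFO  | Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return final

# Fake backend: each task reports STARTED a few times, then SUCCESS.
_states = {"task-a": iter(["STARTED", "STARTED", "SUCCESS"]),
           "task-b": iter(["STARTED", "SUCCESS"])}
result = wait_for_tasks(["task-a", "task-b"],
                        lambda t: next(_states[t], "SUCCESS"), interval=0)
```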
orchestrator |
2025-08-29 17:32:36.926497 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 17:32:36.926508 | orchestrator | Friday 29 August 2025 17:28:55 +0000 (0:00:00.330) 0:00:00.644 *********
2025-08-29 17:32:36.926520 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-08-29 17:32:36.926531 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-08-29 17:32:36.926542 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-08-29 17:32:36.926552 | orchestrator |
2025-08-29 17:32:36.926563 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-08-29 17:32:36.926574 | orchestrator |
2025-08-29 17:32:36.926601 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-08-29 17:32:36.926637 | orchestrator | Friday 29 August 2025 17:28:55 +0000 (0:00:00.519) 0:00:01.163 *********
2025-08-29 17:32:36.926649 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:32:36.926660 | orchestrator |
2025-08-29 17:32:36.926671 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-08-29 17:32:36.926682 | orchestrator | Friday 29 August 2025 17:28:56 +0000 (0:00:01.674) 0:00:03.404 *********
2025-08-29 17:32:36.926693 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 17:32:36.926818 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 17:32:36.926830 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 17:32:36.926841 | orchestrator |
2025-08-29 17:32:36.926852 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-08-29
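The `Setting sysctl values` task raises `vm.max_map_count` to 262144, the minimum OpenSearch/Elasticsearch require for memory-mapped index files. A manual equivalent, sketched with a local staging file (the file name `99-opensearch.conf` is an illustrative choice, not taken from the role):

```shell
# Drop-in content matching what the task sets (262144 is the upstream minimum).
printf 'vm.max_map_count = 262144\n' > 99-opensearch.conf
# To install for real: copy the file to /etc/sysctl.d/ and run `sysctl --system`
# as root, or apply immediately with: sysctl -w vm.max_map_count=262144
cat 99-opensearch.conf
```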
17:32:36.926864 | orchestrator | Friday 29 August 2025 17:28:57 +0000 (0:00:01.674) 0:00:03.404 ********* 2025-08-29 17:32:36.926879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 17:32:36.926896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 17:32:36.926920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 17:32:36.926942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:32:36.926967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': 
True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:32:36.926981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:32:36.926993 | 
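Each container definition above carries a kolla-style `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`). Such a dict maps directly onto Docker's `--health-*` run flags; a small sketch of that translation (the helper function is illustrative, not part of kolla-ansible):

```python
# Healthcheck dict copied from the opensearch container definition above.
healthcheck = {"interval": "30", "retries": "3", "start_period": "5",
               "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"],
               "timeout": "30"}

def docker_health_flags(hc):
    """Render a kolla-style healthcheck dict as `docker run` health flags."""
    assert hc["test"][0] == "CMD-SHELL"  # shell-form check, as in the log above
    return [
        f'--health-cmd={hc["test"][1]}',
        f'--health-interval={hc["interval"]}s',
        f'--health-retries={hc["retries"]}',
        f'--health-start-period={hc["start_period"]}s',
        f'--health-timeout={hc["timeout"]}s',
    ]

flags = docker_health_flags(healthcheck)
```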
orchestrator | 2025-08-29 17:32:36.927004 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 17:32:36.927016 | orchestrator | Friday 29 August 2025 17:28:59 +0000 (0:00:02.091) 0:00:05.495 ********* 2025-08-29 17:32:36.927026 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:32:36.927038 | orchestrator | 2025-08-29 17:32:36.927048 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-08-29 17:32:36.927059 | orchestrator | Friday 29 August 2025 17:29:00 +0000 (0:00:00.826) 0:00:06.321 ********* 2025-08-29 17:32:36.927081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 17:32:36.927106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 17:32:36.927118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 17:32:36.927130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:32:36.927149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:32:36.927173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:32:36.927186 | orchestrator | 2025-08-29 17:32:36.927197 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-08-29 17:32:36.927209 | orchestrator | Friday 29 August 2025 17:29:03 +0000 (0:00:03.084) 0:00:09.406 ********* 2025-08-29 17:32:36.927220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 17:32:36.927232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 17:32:36.927244 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:36.927256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 17:32:36.927288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 17:32:36.927555 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.927570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 17:32:36.927582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 17:32:36.928131 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.928149 | orchestrator | 2025-08-29 17:32:36.928161 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-08-29 17:32:36.928172 | orchestrator | Friday 29 August 2025 17:29:04 +0000 (0:00:01.172) 0:00:10.578 ********* 2025-08-29 17:32:36.928184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 17:32:36.928255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 17:32:36.928271 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:36.928283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 17:32:36.928295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 17:32:36.928307 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.928318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 17:32:36.928390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 17:32:36.928405 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.928416 | orchestrator | 2025-08-29 17:32:36.928433 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-08-29 17:32:36.928444 | orchestrator | Friday 29 August 2025 17:29:06 +0000 (0:00:01.161) 0:00:11.740 ********* 2025-08-29 17:32:36.928455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 17:32:36.928467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 17:32:36.928478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 17:32:36.928533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:32:36.928553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:32:36.928567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:32:36.928578 | orchestrator | 2025-08-29 17:32:36.928590 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-08-29 17:32:36.928600 | orchestrator | Friday 29 August 2025 17:29:09 +0000 (0:00:03.032) 0:00:14.773 ********* 2025-08-29 17:32:36.928611 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.928622 | orchestrator | changed: [testbed-node-1] 
2025-08-29 17:32:36.928633 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:32:36.928644 | orchestrator | 2025-08-29 17:32:36.928655 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-08-29 17:32:36.928666 | orchestrator | Friday 29 August 2025 17:29:12 +0000 (0:00:02.995) 0:00:17.768 ********* 2025-08-29 17:32:36.928688 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:32:36.928699 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.928710 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:32:36.928722 | orchestrator | 2025-08-29 17:32:36.928734 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-08-29 17:32:36.928747 | orchestrator | Friday 29 August 2025 17:29:15 +0000 (0:00:02.929) 0:00:20.697 ********* 2025-08-29 17:32:36.928759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 17:32:36 | INFO  | Task 37e6ac28-8ad3-44b4-b556-864cdd988461 is in state SUCCESS 2025-08-29 17:32:36.928801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 
'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 17:32:36.928836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 17:32:36.928850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:32:36.928871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:32:36.928893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:32:36.928906 | orchestrator | 2025-08-29 17:32:36.928923 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 17:32:36.928936 | orchestrator | Friday 29 August 2025 17:29:17 +0000 (0:00:02.388) 0:00:23.086 ********* 2025-08-29 17:32:36.928948 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:36.928960 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.928971 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.928983 | orchestrator | 2025-08-29 17:32:36.928995 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 17:32:36.929007 | orchestrator | Friday 29 August 2025 17:29:17 +0000 (0:00:00.335) 0:00:23.421 ********* 2025-08-29 17:32:36.929020 | orchestrator | 2025-08-29 17:32:36.929032 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 17:32:36.929044 | orchestrator | Friday 29 August 2025 17:29:17 +0000 (0:00:00.078) 0:00:23.500 ********* 2025-08-29 
17:32:36.929056 | orchestrator | 2025-08-29 17:32:36.929069 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 17:32:36.929080 | orchestrator | Friday 29 August 2025 17:29:17 +0000 (0:00:00.086) 0:00:23.586 ********* 2025-08-29 17:32:36.929090 | orchestrator | 2025-08-29 17:32:36.929101 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-08-29 17:32:36.929112 | orchestrator | Friday 29 August 2025 17:29:18 +0000 (0:00:00.085) 0:00:23.671 ********* 2025-08-29 17:32:36.929122 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:36.929133 | orchestrator | 2025-08-29 17:32:36.929144 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-08-29 17:32:36.929155 | orchestrator | Friday 29 August 2025 17:29:18 +0000 (0:00:00.224) 0:00:23.896 ********* 2025-08-29 17:32:36.929171 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:36.929182 | orchestrator | 2025-08-29 17:32:36.929194 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-08-29 17:32:36.929204 | orchestrator | Friday 29 August 2025 17:29:19 +0000 (0:00:00.871) 0:00:24.768 ********* 2025-08-29 17:32:36.929215 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.929226 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:32:36.929237 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:32:36.929247 | orchestrator | 2025-08-29 17:32:36.929258 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-08-29 17:32:36.929269 | orchestrator | Friday 29 August 2025 17:30:59 +0000 (0:01:40.076) 0:02:04.844 ********* 2025-08-29 17:32:36.929280 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.929291 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:32:36.929301 | orchestrator | changed: [testbed-node-1] 2025-08-29 
17:32:36.929312 | orchestrator | 2025-08-29 17:32:36.929323 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 17:32:36.929333 | orchestrator | Friday 29 August 2025 17:32:23 +0000 (0:01:24.670) 0:03:29.515 ********* 2025-08-29 17:32:36.929344 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:32:36.929371 | orchestrator | 2025-08-29 17:32:36.929382 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-08-29 17:32:36.929393 | orchestrator | Friday 29 August 2025 17:32:24 +0000 (0:00:00.531) 0:03:30.047 ********* 2025-08-29 17:32:36.929404 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:36.929414 | orchestrator | 2025-08-29 17:32:36.929425 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-08-29 17:32:36.929436 | orchestrator | Friday 29 August 2025 17:32:27 +0000 (0:00:02.726) 0:03:32.773 ********* 2025-08-29 17:32:36.929447 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:36.929457 | orchestrator | 2025-08-29 17:32:36.929468 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-08-29 17:32:36.929479 | orchestrator | Friday 29 August 2025 17:32:29 +0000 (0:00:02.409) 0:03:35.183 ********* 2025-08-29 17:32:36.929489 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.929500 | orchestrator | 2025-08-29 17:32:36.929511 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-08-29 17:32:36.929522 | orchestrator | Friday 29 August 2025 17:32:32 +0000 (0:00:02.581) 0:03:37.765 ********* 2025-08-29 17:32:36.929533 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.929543 | orchestrator | 2025-08-29 17:32:36.929554 | orchestrator | PLAY RECAP 
********************************************************************* 2025-08-29 17:32:36.929566 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 17:32:36.929578 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 17:32:36.929597 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 17:32:36.929608 | orchestrator | 2025-08-29 17:32:36.929619 | orchestrator | 2025-08-29 17:32:36.929630 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:32:36.929641 | orchestrator | Friday 29 August 2025 17:32:34 +0000 (0:00:02.411) 0:03:40.176 ********* 2025-08-29 17:32:36.929652 | orchestrator | =============================================================================== 2025-08-29 17:32:36.929662 | orchestrator | opensearch : Restart opensearch container ----------------------------- 100.08s 2025-08-29 17:32:36.929673 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 84.67s 2025-08-29 17:32:36.929684 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.08s 2025-08-29 17:32:36.929714 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.03s 2025-08-29 17:32:36.929725 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.00s 2025-08-29 17:32:36.929736 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.93s 2025-08-29 17:32:36.929752 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.73s 2025-08-29 17:32:36.929763 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.58s 2025-08-29 17:32:36.929773 | orchestrator | opensearch : Apply retention policy to 
existing indices ----------------- 2.41s 2025-08-29 17:32:36.929784 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.41s 2025-08-29 17:32:36.929795 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.39s 2025-08-29 17:32:36.929805 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.09s 2025-08-29 17:32:36.929816 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.67s 2025-08-29 17:32:36.929827 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.17s 2025-08-29 17:32:36.929838 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.16s 2025-08-29 17:32:36.929848 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.87s 2025-08-29 17:32:36.929859 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.83s 2025-08-29 17:32:36.929870 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2025-08-29 17:32:36.929880 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2025-08-29 17:32:36.929891 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2025-08-29 17:32:36.929902 | orchestrator | 2025-08-29 17:32:36 | INFO  | Task 2acb7d51-bda0-44fe-99a5-0782ab9bb4af is in state SUCCESS 2025-08-29 17:32:36.929913 | orchestrator | 2025-08-29 17:32:36.929923 | orchestrator | 2025-08-29 17:32:36.929934 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-08-29 17:32:36.929945 | orchestrator | 2025-08-29 17:32:36.929956 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-08-29 17:32:36.929966 | orchestrator | Friday 29 August 2025 
17:28:54 +0000 (0:00:00.113) 0:00:00.113 ********* 2025-08-29 17:32:36.929977 | orchestrator | ok: [localhost] => { 2025-08-29 17:32:36.929988 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-08-29 17:32:36.929999 | orchestrator | } 2025-08-29 17:32:36.930011 | orchestrator | 2025-08-29 17:32:36.930059 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-08-29 17:32:36.930071 | orchestrator | Friday 29 August 2025 17:28:54 +0000 (0:00:00.044) 0:00:00.157 ********* 2025-08-29 17:32:36.930082 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-08-29 17:32:36.930093 | orchestrator | ...ignoring 2025-08-29 17:32:36.930104 | orchestrator | 2025-08-29 17:32:36.930115 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-08-29 17:32:36.930126 | orchestrator | Friday 29 August 2025 17:28:57 +0000 (0:00:02.950) 0:00:03.107 ********* 2025-08-29 17:32:36.930136 | orchestrator | skipping: [localhost] 2025-08-29 17:32:36.930147 | orchestrator | 2025-08-29 17:32:36.930157 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-08-29 17:32:36.930168 | orchestrator | Friday 29 August 2025 17:28:57 +0000 (0:00:00.062) 0:00:03.169 ********* 2025-08-29 17:32:36.930179 | orchestrator | ok: [localhost] 2025-08-29 17:32:36.930189 | orchestrator | 2025-08-29 17:32:36.930200 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:32:36.930211 | orchestrator | 2025-08-29 17:32:36.930222 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 17:32:36.930232 | orchestrator | Friday 29 August 2025 17:28:57 +0000 (0:00:00.166) 0:00:03.336 ********* 
2025-08-29 17:32:36.930251 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:36.930262 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:32:36.930273 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:32:36.930283 | orchestrator | 2025-08-29 17:32:36.930294 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:32:36.930305 | orchestrator | Friday 29 August 2025 17:28:57 +0000 (0:00:00.325) 0:00:03.661 ********* 2025-08-29 17:32:36.930315 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-08-29 17:32:36.930326 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-08-29 17:32:36.930337 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-08-29 17:32:36.930348 | orchestrator | 2025-08-29 17:32:36.930388 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-08-29 17:32:36.930399 | orchestrator | 2025-08-29 17:32:36.930410 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-08-29 17:32:36.930421 | orchestrator | Friday 29 August 2025 17:28:58 +0000 (0:00:00.771) 0:00:04.433 ********* 2025-08-29 17:32:36.930439 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 17:32:36.930450 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-08-29 17:32:36.930461 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-08-29 17:32:36.930472 | orchestrator | 2025-08-29 17:32:36.930483 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 17:32:36.930494 | orchestrator | Friday 29 August 2025 17:28:59 +0000 (0:00:00.495) 0:00:04.928 ********* 2025-08-29 17:32:36.930504 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:32:36.930516 | orchestrator | 2025-08-29 17:32:36.930527 | 
orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-08-29 17:32:36.930538 | orchestrator | Friday 29 August 2025 17:28:59 +0000 (0:00:00.584) 0:00:05.513 ********* 2025-08-29 17:32:36.930555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 17:32:36.930621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 17:32:36.930649 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 17:32:36.930662 | orchestrator | 2025-08-29 17:32:36.930673 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-08-29 17:32:36.930683 | orchestrator | Friday 29 
August 2025 17:29:03 +0000 (0:00:04.095) 0:00:09.608 ********* 2025-08-29 17:32:36.930694 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.930705 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.930716 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.930726 | orchestrator | 2025-08-29 17:32:36.930737 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-08-29 17:32:36.930756 | orchestrator | Friday 29 August 2025 17:29:04 +0000 (0:00:00.808) 0:00:10.417 ********* 2025-08-29 17:32:36.930767 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.930778 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.930789 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.930799 | orchestrator | 2025-08-29 17:32:36.930810 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-08-29 17:32:36.930821 | orchestrator | Friday 29 August 2025 17:29:06 +0000 (0:00:01.544) 0:00:11.962 ********* 2025-08-29 17:32:36.930840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 17:32:36.930859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 17:32:36.930885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 17:32:36.930897 | orchestrator | 2025-08-29 17:32:36.930914 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-08-29 17:32:36.930925 | orchestrator | Friday 29 August 2025 17:29:10 +0000 (0:00:04.337) 0:00:16.299 ********* 2025-08-29 17:32:36.930935 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.930946 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.930957 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.930968 | orchestrator | 2025-08-29 17:32:36.930978 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-08-29 17:32:36.930989 | orchestrator | Friday 29 August 2025 17:29:11 +0000 (0:00:01.265) 0:00:17.564 ********* 2025-08-29 17:32:36.930999 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.931010 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:32:36.931021 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:32:36.931031 | orchestrator | 2025-08-29 17:32:36.931042 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 17:32:36.931052 | orchestrator | Friday 29 August 2025 17:29:17 +0000 (0:00:06.311) 0:00:23.876 ********* 2025-08-29 17:32:36.931063 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:32:36.931074 | orchestrator | 2025-08-29 17:32:36.931089 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-08-29 17:32:36.931100 | orchestrator | Friday 29 August 2025 17:29:18 +0000 (0:00:00.583) 0:00:24.459 ********* 2025-08-29 17:32:36.931112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:32:36.931130 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.931150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:32:36.931162 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:36.931179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:32:36.931197 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.931208 | orchestrator | 2025-08-29 17:32:36.931219 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-08-29 17:32:36.931229 | orchestrator | Friday 29 August 2025 17:29:23 +0000 (0:00:04.590) 0:00:29.050 ********* 2025-08-29 17:32:36.931248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:32:36.931260 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:36.931277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:32:36.931295 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.931306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:32:36.931318 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.931329 | orchestrator | 2025-08-29 17:32:36.931345 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-08-29 17:32:36.931373 | orchestrator | Friday 29 August 2025 17:29:26 +0000 (0:00:03.339) 0:00:32.389 ********* 2025-08-29 17:32:36.931390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:32:36.931409 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:36.931420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:32:36.931432 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.931457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:32:36.931475 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.931486 | orchestrator | 2025-08-29 17:32:36.931497 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-08-29 17:32:36.931508 | orchestrator | Friday 29 August 2025 17:29:30 +0000 (0:00:03.686) 0:00:36.075 ********* 2025-08-29 17:32:36.931519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 17:32:36.931545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 17:32:36.931565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 17:32:36.931577 | orchestrator | 2025-08-29 17:32:36.931588 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-08-29 17:32:36.931598 | orchestrator | Friday 29 August 2025 17:29:34 +0000 (0:00:04.444) 0:00:40.520 ********* 2025-08-29 17:32:36.931609 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.931620 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:32:36.931630 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:32:36.931641 | orchestrator | 2025-08-29 17:32:36.931652 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-08-29 17:32:36.931662 | orchestrator | Friday 29 August 2025 17:29:35 +0000 (0:00:00.970) 0:00:41.491 ********* 2025-08-29 17:32:36.931673 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:36.931684 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:32:36.931695 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:32:36.931705 | orchestrator | 2025-08-29 17:32:36.931716 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-08-29 17:32:36.931726 | orchestrator | Friday 29 August 2025 17:29:36 +0000 (0:00:00.650) 0:00:42.142 ********* 2025-08-29 17:32:36.931737 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:36.931748 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:32:36.931758 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:32:36.931769 | orchestrator | 2025-08-29 17:32:36.931780 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-08-29 17:32:36.931790 | orchestrator | Friday 29 August 2025 17:29:36 +0000 (0:00:00.411) 0:00:42.553 ********* 2025-08-29 17:32:36.931807 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-08-29 17:32:36.931824 | orchestrator | ...ignoring 2025-08-29 17:32:36.931835 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-08-29 17:32:36.931846 | orchestrator | ...ignoring 2025-08-29 17:32:36.931857 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-08-29 17:32:36.931868 | orchestrator | ...ignoring 2025-08-29 17:32:36.931879 | orchestrator | 2025-08-29 17:32:36.931889 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-08-29 17:32:36.931900 | orchestrator | Friday 29 August 2025 17:29:47 +0000 (0:00:11.021) 0:00:53.575 ********* 2025-08-29 17:32:36.931911 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:36.931921 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:32:36.931932 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:32:36.931942 | orchestrator | 2025-08-29 17:32:36.931958 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-08-29 17:32:36.931969 | orchestrator | Friday 29 August 2025 17:29:48 +0000 (0:00:00.765) 0:00:54.340 ********* 2025-08-29 17:32:36.931980 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:36.931990 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.932001 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.932012 | orchestrator | 2025-08-29 17:32:36.932022 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-08-29 17:32:36.932033 | orchestrator | Friday 29 August 2025 17:29:49 +0000 (0:00:00.919) 0:00:55.260 ********* 2025-08-29 17:32:36.932044 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 17:32:36.932054 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.932065 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.932076 | orchestrator | 2025-08-29 17:32:36.932087 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-08-29 17:32:36.932097 | orchestrator | Friday 29 August 2025 17:29:49 +0000 (0:00:00.517) 0:00:55.778 ********* 2025-08-29 17:32:36.932108 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:36.932119 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.932129 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.932140 | orchestrator | 2025-08-29 17:32:36.932151 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-08-29 17:32:36.932161 | orchestrator | Friday 29 August 2025 17:29:50 +0000 (0:00:00.577) 0:00:56.355 ********* 2025-08-29 17:32:36.932172 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:36.932183 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:32:36.932194 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:32:36.932204 | orchestrator | 2025-08-29 17:32:36.932215 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-08-29 17:32:36.932226 | orchestrator | Friday 29 August 2025 17:29:51 +0000 (0:00:00.541) 0:00:56.897 ********* 2025-08-29 17:32:36.932236 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:36.932247 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.932258 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.932268 | orchestrator | 2025-08-29 17:32:36.932279 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 17:32:36.932290 | orchestrator | Friday 29 August 2025 17:29:52 +0000 (0:00:01.019) 0:00:57.917 ********* 2025-08-29 17:32:36.932301 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 17:32:36.932311 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.932322 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-08-29 17:32:36.932333 | orchestrator | 2025-08-29 17:32:36.932344 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-08-29 17:32:36.932409 | orchestrator | Friday 29 August 2025 17:29:52 +0000 (0:00:00.424) 0:00:58.342 ********* 2025-08-29 17:32:36.932433 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.932444 | orchestrator | 2025-08-29 17:32:36.932455 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-08-29 17:32:36.932465 | orchestrator | Friday 29 August 2025 17:30:14 +0000 (0:00:21.834) 0:01:20.176 ********* 2025-08-29 17:32:36.932476 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:36.932486 | orchestrator | 2025-08-29 17:32:36.932497 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 17:32:36.932508 | orchestrator | Friday 29 August 2025 17:30:14 +0000 (0:00:00.158) 0:01:20.334 ********* 2025-08-29 17:32:36.932519 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:36.932529 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.932540 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.932551 | orchestrator | 2025-08-29 17:32:36.932561 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-08-29 17:32:36.932572 | orchestrator | Friday 29 August 2025 17:30:15 +0000 (0:00:01.112) 0:01:21.447 ********* 2025-08-29 17:32:36.932583 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.932593 | orchestrator | 2025-08-29 17:32:36.932604 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-08-29 17:32:36.932615 | orchestrator | Friday 29 
August 2025 17:30:24 +0000 (0:00:08.798) 0:01:30.245 ********* 2025-08-29 17:32:36.932625 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:36.932636 | orchestrator | 2025-08-29 17:32:36.932647 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-08-29 17:32:36.932657 | orchestrator | Friday 29 August 2025 17:30:26 +0000 (0:00:01.757) 0:01:32.002 ********* 2025-08-29 17:32:36.932668 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:36.932679 | orchestrator | 2025-08-29 17:32:36.932689 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-08-29 17:32:36.932700 | orchestrator | Friday 29 August 2025 17:30:28 +0000 (0:00:02.760) 0:01:34.763 ********* 2025-08-29 17:32:36.932711 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.932722 | orchestrator | 2025-08-29 17:32:36.932732 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-08-29 17:32:36.932743 | orchestrator | Friday 29 August 2025 17:30:29 +0000 (0:00:00.161) 0:01:34.925 ********* 2025-08-29 17:32:36.932759 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:36.932770 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.932781 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.932792 | orchestrator | 2025-08-29 17:32:36.932802 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-08-29 17:32:36.932813 | orchestrator | Friday 29 August 2025 17:30:29 +0000 (0:00:00.345) 0:01:35.270 ********* 2025-08-29 17:32:36.932824 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:36.932835 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-08-29 17:32:36.932846 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:32:36.932857 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:32:36.932867 | orchestrator | 
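The `haproxy.mariadb` values dumped with each item earlier in this task output correspond, roughly, to a listen section like the one below. This is a hedged sketch: kolla-ansible renders its own template (often as separate frontend/backend stanzas), and the VIP bind address is not part of this excerpt — `<internal_vip>` is a placeholder.

```haproxy
# Sketch only: reconstructed from the logged haproxy.mariadb values,
# not the exact template kolla-ansible renders.
listen mariadb
    mode tcp
    bind <internal_vip>:3306
    option clitcpka
    timeout client 3600s
    option srvtcpka
    timeout server 3600s
    server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5
    server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup
    server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup
```

Only testbed-node-0 receives traffic in normal operation; the two `backup` members take over if its port-3306 health check (2000 ms interval, 2 rises, 5 falls) fails — the usual single-writer pattern for a Galera cluster behind HAProxy.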
2025-08-29 17:32:36.932878 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-08-29 17:32:36.932889 | orchestrator | skipping: no hosts matched 2025-08-29 17:32:36.932899 | orchestrator | 2025-08-29 17:32:36.932910 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-08-29 17:32:36.932921 | orchestrator | 2025-08-29 17:32:36.932932 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-08-29 17:32:36.932947 | orchestrator | Friday 29 August 2025 17:30:29 +0000 (0:00:00.584) 0:01:35.855 ********* 2025-08-29 17:32:36.932958 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:32:36.932969 | orchestrator | 2025-08-29 17:32:36.932980 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-08-29 17:32:36.932991 | orchestrator | Friday 29 August 2025 17:30:51 +0000 (0:00:21.575) 0:01:57.431 ********* 2025-08-29 17:32:36.933001 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:32:36.933012 | orchestrator | 2025-08-29 17:32:36.933027 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-08-29 17:32:36.933038 | orchestrator | Friday 29 August 2025 17:31:13 +0000 (0:00:21.616) 0:02:19.047 ********* 2025-08-29 17:32:36.933048 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:32:36.933059 | orchestrator | 2025-08-29 17:32:36.933069 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-08-29 17:32:36.933080 | orchestrator | 2025-08-29 17:32:36.933091 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-08-29 17:32:36.933101 | orchestrator | Friday 29 August 2025 17:31:15 +0000 (0:00:02.489) 0:02:21.537 ********* 2025-08-29 17:32:36.933112 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:32:36.933122 | orchestrator | 
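The restart-then-wait pattern in this play (restart the container, wait for the port to answer, then wait for WSREP to report Synced) can be sketched as follows. The earlier error text ("Timeout when waiting for search string MariaDB in 192.168.16.10:3306") is consistent with `ansible.builtin.wait_for` using `search_regex`; the WSREP task below is a hypothetical stand-in for kolla-ansible's actual implementation, and `mariadb_monitor_password` is an assumed variable name.

```yaml
# Hedged sketch of the "wait for liveness, then wait for WSREP sync" steps.
- name: Wait for MariaDB service port liveness
  ansible.builtin.wait_for:
    host: "{{ api_interface_address }}"   # e.g. 192.168.16.11 on testbed-node-1
    port: 3306
    search_regex: MariaDB                 # the server greeting contains "MariaDB"
    timeout: 60

- name: Wait for MariaDB service to sync WSREP
  ansible.builtin.command: >-
    mysql -h {{ api_interface_address }} -u monitor
    -p{{ mariadb_monitor_password }}
    -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
  register: wsrep_status
  until: "'Synced' in wsrep_status.stdout"
  retries: 10
  delay: 6
  changed_when: false
```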
2025-08-29 17:32:36.933133 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-08-29 17:32:36.933144 | orchestrator | Friday 29 August 2025 17:31:42 +0000 (0:00:26.985) 0:02:48.522 ********* 2025-08-29 17:32:36.933155 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:32:36.933165 | orchestrator | 2025-08-29 17:32:36.933176 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-08-29 17:32:36.933187 | orchestrator | Friday 29 August 2025 17:31:58 +0000 (0:00:15.547) 0:03:04.070 ********* 2025-08-29 17:32:36.933198 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:32:36.933209 | orchestrator | 2025-08-29 17:32:36.933219 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-08-29 17:32:36.933230 | orchestrator | 2025-08-29 17:32:36.933241 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-08-29 17:32:36.933251 | orchestrator | Friday 29 August 2025 17:32:01 +0000 (0:00:02.820) 0:03:06.891 ********* 2025-08-29 17:32:36.933262 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.933273 | orchestrator | 2025-08-29 17:32:36.933283 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-08-29 17:32:36.933294 | orchestrator | Friday 29 August 2025 17:32:18 +0000 (0:00:17.907) 0:03:24.798 ********* 2025-08-29 17:32:36.933305 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:36.933315 | orchestrator | 2025-08-29 17:32:36.933326 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-08-29 17:32:36.933337 | orchestrator | Friday 29 August 2025 17:32:19 +0000 (0:00:00.558) 0:03:25.357 ********* 2025-08-29 17:32:36.933347 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:36.933410 | orchestrator | 2025-08-29 17:32:36.933422 | orchestrator | PLAY [Apply mariadb 
post-configuration] **************************************** 2025-08-29 17:32:36.933433 | orchestrator | 2025-08-29 17:32:36.933443 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-08-29 17:32:36.933454 | orchestrator | Friday 29 August 2025 17:32:22 +0000 (0:00:03.024) 0:03:28.381 ********* 2025-08-29 17:32:36.933465 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:32:36.933475 | orchestrator | 2025-08-29 17:32:36.933486 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-08-29 17:32:36.933497 | orchestrator | Friday 29 August 2025 17:32:23 +0000 (0:00:00.608) 0:03:28.990 ********* 2025-08-29 17:32:36.933507 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.933518 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.933529 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.933539 | orchestrator | 2025-08-29 17:32:36.933550 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-08-29 17:32:36.933561 | orchestrator | Friday 29 August 2025 17:32:25 +0000 (0:00:02.277) 0:03:31.267 ********* 2025-08-29 17:32:36.933571 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.933582 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.933592 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.933603 | orchestrator | 2025-08-29 17:32:36.933614 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-08-29 17:32:36.933624 | orchestrator | Friday 29 August 2025 17:32:27 +0000 (0:00:02.395) 0:03:33.662 ********* 2025-08-29 17:32:36.933642 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.933652 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.933663 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.933674 | orchestrator | 
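The post-configuration tasks run their SQL only once per shard — note `changed` on testbed-node-0 and `skipping` on the other two nodes above. A minimal sketch of the monitor-user task, assuming the `community.mysql` collection is available; the privilege set and variable names are assumptions, not kolla-ansible's actual code:

```yaml
# Hedged sketch: create the monitor user from the first shard host only.
- name: Creating mysql monitor user
  community.mysql.mysql_user:
    login_host: "{{ api_interface_address }}"
    login_user: root
    login_password: "{{ database_password }}"   # assumed variable name
    name: monitor
    host: "%"
    password: "{{ mariadb_monitor_password }}"  # assumed variable name
    priv: "*.*:REPLICATION CLIENT"              # assumed privilege set
  when: inventory_hostname == groups['mariadb_shard_0'] | first
```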
2025-08-29 17:32:36.933684 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-08-29 17:32:36.933695 | orchestrator | Friday 29 August 2025 17:32:29 +0000 (0:00:02.178) 0:03:35.841 ********* 2025-08-29 17:32:36.933706 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.933717 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.933727 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:36.933738 | orchestrator | 2025-08-29 17:32:36.933749 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-08-29 17:32:36.933767 | orchestrator | Friday 29 August 2025 17:32:32 +0000 (0:00:02.090) 0:03:37.932 ********* 2025-08-29 17:32:36.933779 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:36.933790 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:32:36.933800 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:32:36.933811 | orchestrator | 2025-08-29 17:32:36.933822 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-08-29 17:32:36.933833 | orchestrator | Friday 29 August 2025 17:32:35 +0000 (0:00:03.420) 0:03:41.353 ********* 2025-08-29 17:32:36.933843 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:36.933853 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:36.933863 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:36.933872 | orchestrator | 2025-08-29 17:32:36.933882 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:32:36.933891 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-08-29 17:32:36.933906 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-08-29 17:32:36.933916 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1 
 2025-08-29 17:32:36.933926 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-08-29 17:32:36.933936 | orchestrator | 2025-08-29 17:32:36.933945 | orchestrator | 2025-08-29 17:32:36.933955 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:32:36.933964 | orchestrator | Friday 29 August 2025 17:32:35 +0000 (0:00:00.487) 0:03:41.840 ********* 2025-08-29 17:32:36.933974 | orchestrator | =============================================================================== 2025-08-29 17:32:36.933984 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 48.56s 2025-08-29 17:32:36.933993 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 37.16s 2025-08-29 17:32:36.934002 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 21.83s 2025-08-29 17:32:36.934012 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.91s 2025-08-29 17:32:36.934047 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.02s 2025-08-29 17:32:36.934057 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.80s 2025-08-29 17:32:36.934067 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 6.31s 2025-08-29 17:32:36.934076 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.31s 2025-08-29 17:32:36.934086 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.59s 2025-08-29 17:32:36.934095 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.45s 2025-08-29 17:32:36.934105 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.34s 2025-08-29 17:32:36.934114 | orchestrator | 
mariadb : Ensuring config directories exist ----------------------------- 4.10s 2025-08-29 17:32:36.934130 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.69s 2025-08-29 17:32:36.934139 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.42s 2025-08-29 17:32:36.934149 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.34s 2025-08-29 17:32:36.934158 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.02s 2025-08-29 17:32:36.934168 | orchestrator | Check MariaDB service --------------------------------------------------- 2.95s 2025-08-29 17:32:36.934178 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.76s 2025-08-29 17:32:36.934187 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.40s 2025-08-29 17:32:36.934197 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.28s 2025-08-29 17:32:36.934206 | orchestrator | 2025-08-29 17:32:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:32:39.966426 | orchestrator | 2025-08-29 17:32:39 | INFO  | Task dbf9536b-4a8e-45e2-82bf-b4dd083cefa1 is in state STARTED 2025-08-29 17:32:39.969086 | orchestrator | 2025-08-29 17:32:39 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:32:39.970175 | orchestrator | 2025-08-29 17:32:39 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:32:39.970521 | orchestrator | 2025-08-29 17:32:39 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:32:55.205220 | orchestrator | 2025-08-29 17:32:55 | INFO  | Task dbf9536b-4a8e-45e2-82bf-b4dd083cefa1 is in state STARTED 2025-08-29 17:32:55.206751 | orchestrator | 2025-08-29 17:32:55 | 
INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:32:55.208794 | orchestrator | 2025-08-29 17:32:55 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:32:55.208829 | orchestrator | 2025-08-29 17:32:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:32:58.265426 | orchestrator | 2025-08-29 17:32:58 | INFO  | Task dbf9536b-4a8e-45e2-82bf-b4dd083cefa1 is in state STARTED 2025-08-29 17:32:58.265673 | orchestrator | 2025-08-29 17:32:58 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:32:58.268405 | orchestrator | 2025-08-29 17:32:58 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:32:58.268440 | orchestrator | 2025-08-29 17:32:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:01.310919 | orchestrator | 2025-08-29 17:33:01 | INFO  | Task dbf9536b-4a8e-45e2-82bf-b4dd083cefa1 is in state STARTED 2025-08-29 17:33:01.311029 | orchestrator | 2025-08-29 17:33:01 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:01.311623 | orchestrator | 2025-08-29 17:33:01 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:01.311648 | orchestrator | 2025-08-29 17:33:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:04.356561 | orchestrator | 2025-08-29 17:33:04 | INFO  | Task dbf9536b-4a8e-45e2-82bf-b4dd083cefa1 is in state SUCCESS 2025-08-29 17:33:04.357688 | orchestrator | 2025-08-29 17:33:04.357780 | orchestrator | 2025-08-29 17:33:04.357796 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-08-29 17:33:04.357901 | orchestrator | 2025-08-29 17:33:04.357915 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-08-29 17:33:04.357927 | orchestrator | Friday 29 August 2025 17:30:49 +0000 (0:00:00.703) 0:00:00.703 ********* 
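The `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines above come from a client polling background tasks until each one leaves the running state. A minimal sketch of that polling pattern (the `wait_for_tasks` name and the injected `fetch_state` callable are illustrative stand-ins, not the actual client code):

```python
import time


def wait_for_tasks(task_ids, fetch_state, interval=1.0, timeout=600.0):
    """Poll each task id until none is PENDING/STARTED, like the log above.

    fetch_state(task_id) -> state string, e.g. "STARTED" or "SUCCESS".
    Returns a dict mapping task id to its terminal state.
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    results = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                results[task_id] = state  # terminal: SUCCESS, FAILURE, ...
        pending.difference_update(results)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

A fixed sleep between rounds keeps the load on the task API constant regardless of how many tasks are still running, which matches the steady ~3-second cadence of the checks in this log.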
2025-08-29 17:33:04.357939 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:33:04.357951 | orchestrator |
2025-08-29 17:33:04.357962 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-08-29 17:33:04.357973 | orchestrator | Friday 29 August 2025 17:30:50 +0000 (0:00:00.723) 0:00:01.426 *********
2025-08-29 17:33:04.357984 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:33:04.357996 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:33:04.358007 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:33:04.358885 | orchestrator |
2025-08-29 17:33:04.358917 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-08-29 17:33:04.358929 | orchestrator | Friday 29 August 2025 17:30:51 +0000 (0:00:00.647) 0:00:02.074 *********
2025-08-29 17:33:04.358940 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:33:04.358951 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:33:04.358962 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:33:04.358973 | orchestrator |
2025-08-29 17:33:04.358984 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-08-29 17:33:04.358995 | orchestrator | Friday 29 August 2025 17:30:51 +0000 (0:00:00.340) 0:00:02.415 *********
2025-08-29 17:33:04.359006 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:33:04.359017 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:33:04.359028 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:33:04.359038 | orchestrator |
2025-08-29 17:33:04.359050 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-08-29 17:33:04.359061 | orchestrator | Friday 29 August 2025 17:30:52 +0000 (0:00:00.808) 0:00:03.223 *********
2025-08-29 17:33:04.359071 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:33:04.359082 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:33:04.359093 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:33:04.359104 | orchestrator |
2025-08-29 17:33:04.359115 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-08-29 17:33:04.359152 | orchestrator | Friday 29 August 2025 17:30:52 +0000 (0:00:00.329) 0:00:03.553 *********
2025-08-29 17:33:04.359163 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:33:04.359174 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:33:04.359184 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:33:04.359195 | orchestrator |
2025-08-29 17:33:04.359205 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-08-29 17:33:04.359216 | orchestrator | Friday 29 August 2025 17:30:52 +0000 (0:00:00.312) 0:00:03.865 *********
2025-08-29 17:33:04.359227 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:33:04.359263 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:33:04.359275 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:33:04.359285 | orchestrator |
2025-08-29 17:33:04.359297 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-08-29 17:33:04.359322 | orchestrator | Friday 29 August 2025 17:30:53 +0000 (0:00:00.351) 0:00:04.217 *********
2025-08-29 17:33:04.359333 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:33:04.359345 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:33:04.359386 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:33:04.359399 | orchestrator |
2025-08-29 17:33:04.359410 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-08-29 17:33:04.359420 | orchestrator | Friday 29 August 2025 17:30:53 +0000 (0:00:00.518) 0:00:04.735 *********
2025-08-29 17:33:04.359431 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:33:04.359442 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:33:04.359453 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:33:04.359463 | orchestrator |
2025-08-29 17:33:04.359476 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-08-29 17:33:04.359488 | orchestrator | Friday 29 August 2025 17:30:54 +0000 (0:00:00.342) 0:00:05.077 *********
2025-08-29 17:33:04.359500 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-08-29 17:33:04.359512 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 17:33:04.359525 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 17:33:04.359537 | orchestrator |
2025-08-29 17:33:04.359549 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-08-29 17:33:04.359562 | orchestrator | Friday 29 August 2025 17:30:54 +0000 (0:00:00.677) 0:00:05.755 *********
2025-08-29 17:33:04.359574 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:33:04.359586 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:33:04.359597 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:33:04.359609 | orchestrator |
2025-08-29 17:33:04.359621 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-08-29 17:33:04.359634 | orchestrator | Friday 29 August 2025 17:30:55 +0000 (0:00:00.449) 0:00:06.204 *********
2025-08-29 17:33:04.359646 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-08-29 17:33:04.359658 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 17:33:04.359670 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 17:33:04.359682 | orchestrator |
2025-08-29 17:33:04.359695 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
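The `Find a running mon container` task above loops over the monitors and runs a container lookup (`docker ps -q --filter name=ceph-mon-<hostname>`, visible in the result items further below) on each one, keeping the container IDs it finds. A rough sketch of that lookup, with an injected `run` callable standing in for the delegated command execution (the `find_running_mon` helper and its signature are illustrative, not ceph-ansible code):

```python
def find_running_mon(mon_hosts, run, container_binary="docker"):
    """Return (host, container_id) for the first monitor whose
    ceph-mon container is running, or None if none is up.

    run(host, cmd_list) -> the command's stdout as a string.
    """
    for host in mon_hosts:
        # Same filter the role uses: container named ceph-mon-<hostname>.
        cmd = [container_binary, "ps", "-q", "--filter", f"name=ceph-mon-{host}"]
        stdout = run(host, cmd).strip()
        if stdout:
            # `docker ps -q` prints one container ID per line; take the first.
            return host, stdout.splitlines()[0]
    return None
```

Injecting `run` keeps the discovery logic testable without Docker; in the real role the command is delegated to each monitor host over SSH.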
2025-08-29 17:33:04.359706 | orchestrator | Friday 29 August 2025 17:30:57 +0000 (0:00:02.218) 0:00:08.423 *********
2025-08-29 17:33:04.359719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 17:33:04.359732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 17:33:04.359744 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 17:33:04.359756 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:33:04.359768 | orchestrator |
2025-08-29 17:33:04.359781 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-08-29 17:33:04.359848 | orchestrator | Friday 29 August 2025 17:30:57 +0000 (0:00:00.397) 0:00:08.821 *********
2025-08-29 17:33:04.359884 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-08-29 17:33:04.359899 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-08-29 17:33:04.359910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-08-29 17:33:04.359921 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:33:04.359932 | orchestrator |
2025-08-29 17:33:04.359943 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-08-29 17:33:04.359953 | orchestrator | Friday 29 August 2025 17:30:58 +0000 (0:00:00.880) 0:00:09.702 *********
2025-08-29 17:33:04.360004 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 17:33:04.360020 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 17:33:04.360038 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 17:33:04.360050 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:33:04.360061 | orchestrator |
2025-08-29 17:33:04.360072 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-08-29 17:33:04.360083 | orchestrator | Friday 29 August 2025 17:30:58 +0000 (0:00:00.169) 0:00:09.872 *********
2025-08-29 17:33:04.360096 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b11412a334d7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-08-29 17:30:55.913863', 'end': '2025-08-29 17:30:55.963339', 'delta': '0:00:00.049476', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b11412a334d7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-08-29 17:33:04.360111 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e5299c048a91', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-08-29 17:30:56.722804', 'end': '2025-08-29 17:30:56.751846', 'delta': '0:00:00.029042', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e5299c048a91'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-08-29 17:33:04.360169 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a0121547f0b2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-08-29 17:30:57.270580', 'end': '2025-08-29 17:30:57.315478', 'delta': '0:00:00.044898', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a0121547f0b2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-08-29 17:33:04.360183 | orchestrator |
2025-08-29 17:33:04.360194 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-08-29 17:33:04.360205 | orchestrator | Friday 29 August 2025 17:30:59 +0000 (0:00:00.403) 0:00:10.275 *********
2025-08-29 17:33:04.360216 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:33:04.360226 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:33:04.360237 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:33:04.360248 | orchestrator |
2025-08-29 17:33:04.360259 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-08-29 17:33:04.360269 | orchestrator | Friday 29 August 2025 17:30:59 +0000 (0:00:00.476) 0:00:10.751 *********
2025-08-29 17:33:04.360280 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-08-29 17:33:04.360290 | orchestrator |
2025-08-29 17:33:04.360301 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-08-29 17:33:04.360312 | orchestrator | Friday 29 August 2025 17:31:01 +0000 (0:00:01.753) 0:00:12.504 *********
2025-08-29 17:33:04.360322 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:33:04.360333 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:33:04.360344 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:33:04.360410 | orchestrator |
2025-08-29 17:33:04.360424 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-08-29 17:33:04.360435 | orchestrator | Friday 29 August 2025 17:31:01 +0000 (0:00:00.447) 0:00:12.844 *********
2025-08-29 17:33:04.360445 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:33:04.360456 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:33:04.360467 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:33:04.360477 | orchestrator |
2025-08-29 17:33:04.360488 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-08-29 17:33:04.360499 | orchestrator | Friday 29 August 2025 17:31:02 +0000 (0:00:00.536) 0:00:13.291 *********
2025-08-29 17:33:04.360509 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:33:04.360521 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:33:04.360532 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:33:04.360542 | orchestrator |
2025-08-29 17:33:04.360553 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-08-29 17:33:04.360570 | orchestrator | Friday 29 August 2025 17:31:02 +0000 (0:00:00.536) 0:00:13.828 *********
2025-08-29 17:33:04.360581 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:33:04.360592 | orchestrator |
2025-08-29 17:33:04.360603 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-08-29 17:33:04.360614 | orchestrator | Friday 29 August 2025 17:31:03 +0000 (0:00:00.161) 0:00:13.990 *********
2025-08-29 17:33:04.360624 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:33:04.360635 | orchestrator |
2025-08-29 17:33:04.360646 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-08-29 17:33:04.360657 | orchestrator | Friday 29 August 2025 17:31:03 +0000 (0:00:00.263) 0:00:14.254 *********
2025-08-29 17:33:04.360675 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:33:04.360686 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:33:04.360697 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:33:04.360707 | orchestrator |
2025-08-29 17:33:04.360718 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-08-29 17:33:04.360729 | orchestrator | Friday 29 August 2025 17:31:03 +0000 (0:00:00.312) 0:00:14.566 *********
2025-08-29 17:33:04.360740 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:33:04.360750 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:33:04.360761 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:33:04.360772 | orchestrator |
2025-08-29 17:33:04.360783 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-08-29 17:33:04.360794 | orchestrator | Friday 29 August 2025 17:31:04 +0000 (0:00:00.472) 0:00:15.039 *********
2025-08-29 17:33:04.360804 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:33:04.360815 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:33:04.360826 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:33:04.360836 | orchestrator |
2025-08-29 17:33:04.360847 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-08-29 17:33:04.360858 | orchestrator | Friday 29 August 2025 17:31:04 +0000 (0:00:00.794) 0:00:15.834 *********
2025-08-29 17:33:04.360869 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:33:04.360880 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:33:04.360891 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:33:04.360901 | orchestrator |
2025-08-29 17:33:04.360912 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-08-29 17:33:04.360923 | orchestrator | Friday 29 August 2025 17:31:05 +0000 (0:00:00.446) 0:00:16.280 *********
2025-08-29 17:33:04.360934 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:33:04.360944 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:33:04.360955 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:33:04.360966 | orchestrator |
2025-08-29 17:33:04.360976 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-08-29 17:33:04.360987 | orchestrator | Friday 29 August 2025 17:31:05 +0000 (0:00:00.401) 0:00:16.682 *********
2025-08-29 17:33:04.360998 | orchestrator | skipping: [testbed-node-3]
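The device tasks here (`Resolve device link(s)`, `Set_fact build devices from resolved symlinks`, and the `Collect existed devices` loop below) work over Ansible's `devices` hardware facts, passing over `dm-*`/`loop*` entries and disks that are already partitioned or claimed by LVM. A rough illustration of that kind of filtering over a facts dict (the `candidate_disks` helper and its exact conditions are illustrative, not ceph-ansible's logic):

```python
def candidate_disks(devices):
    """Pick whole disks with no partitions and no holders from an
    Ansible facts 'devices' mapping (device name -> properties dict)."""
    picked = []
    for name, info in devices.items():
        if name.startswith(("loop", "dm-")):
            continue  # virtual devices, never OSD candidates
        if info.get("partitions") or info.get("holders"):
            continue  # already partitioned, or claimed (e.g. by a ceph LV)
        picked.append(name)
    return sorted(picked)
```

In this log every real disk is either the root device (`sda`, partitioned) or already holds a ceph OSD logical volume (`sdb`, `sdc`), which is why a filter like this would come back empty on an already-deployed node.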
2025-08-29 17:33:04.361009 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:33:04.361020 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:33:04.361030 | orchestrator |
2025-08-29 17:33:04.361041 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-08-29 17:33:04.361088 | orchestrator | Friday 29 August 2025 17:31:06 +0000 (0:00:00.448) 0:00:17.131 *********
2025-08-29 17:33:04.361101 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:33:04.361112 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:33:04.361122 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:33:04.361133 | orchestrator |
2025-08-29 17:33:04.361144 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-08-29 17:33:04.361155 | orchestrator | Friday 29 August 2025 17:31:06 +0000 (0:00:00.599) 0:00:17.730 *********
2025-08-29 17:33:04.361167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b00dade2--f82b--53af--89a3--8c9250354ec6-osd--block--b00dade2--f82b--53af--89a3--8c9250354ec6', 'dm-uuid-LVM-VY8oO4jmapOTYN6w4zG3PV2M2NyDnMLhL8j5KdTyG1xSlCfRvP3XmgHoqydm0inH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8088253a--7e26--529d--8fdb--0f472c9bb5d3-osd--block--8088253a--7e26--529d--8fdb--0f472c9bb5d3', 'dm-uuid-LVM-FVeBMUPoAV9RcTz0ycqRnP1EtAKr6OFuAEc2nlv4hoplmaxvoQG2BgcNFvR0LK8g'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7cc16d54--75e9--5c21--b21a--878ce6efb3d6-osd--block--7cc16d54--75e9--5c21--b21a--878ce6efb3d6', 'dm-uuid-LVM-mGBVEzB4gcKM39Xx0aLE22bZ3zyymiRPX0QySadYR5ZdqE0ySp3sIrwpHhhyJTyJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53dd44b5--7849--5101--9e2a--fd90ac927c8f-osd--block--53dd44b5--7849--5101--9e2a--fd90ac927c8f', 'dm-uuid-LVM-tZ5rsxduFMnqzvdTHnqQLWccJnZQYchZIAFT1e5WnYYXln1r877aX72JY4ISW52H'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361337 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part1', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part14', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part15', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part16', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 17:33:04.361523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a4c19265--6381--5c6d--bd77--cfabc91aafa2-osd--block--a4c19265--6381--5c6d--bd77--cfabc91aafa2', 'dm-uuid-LVM-QSwdfYrrHmm7V51x7PzoPflTmwgQDmNf0ILcNWcv6jcDDetm7KKU0VlRyvTcFcbc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b00dade2--f82b--53af--89a3--8c9250354ec6-osd--block--b00dade2--f82b--53af--89a3--8c9250354ec6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Dkm75k-dgyQ-fHCc-DskW-R3kq-Avrn-VOvlhd', 'scsi-0QEMU_QEMU_HARDDISK_82706033-d7aa-4ff6-a1c3-b9e917369a8a', 'scsi-SQEMU_QEMU_HARDDISK_82706033-d7aa-4ff6-a1c3-b9e917369a8a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 17:33:04.361592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b12c38cd--5c6b--5ee1--93c6--dbb5afb60591-osd--block--b12c38cd--5c6b--5ee1--93c6--dbb5afb60591', 'dm-uuid-LVM-i9XIynlVD4XQHui1DTZfZNm2dtjd80d66kxMgcfPpxXv56ULLy5Z5x7FNHv6aseG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 17:33:04.361629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8088253a--7e26--529d--8fdb--0f472c9bb5d3-osd--block--8088253a--7e26--529d--8fdb--0f472c9bb5d3'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fo9JB4-Jkhr-Vfiv-ZF1s-HkrB-0RZo-O0eTPw', 'scsi-0QEMU_QEMU_HARDDISK_2fca4086-decf-4125-8890-d41999d174b7', 'scsi-SQEMU_QEMU_HARDDISK_2fca4086-decf-4125-8890-d41999d174b7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:33:04.361640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:33:04.361651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:33:04.361672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part1', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part14', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part15', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part16', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:33:04.361692 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95e03a76-aedc-4db5-a6e6-17eb5f40fbcd', 'scsi-SQEMU_QEMU_HARDDISK_95e03a76-aedc-4db5-a6e6-17eb5f40fbcd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:33:04.361708 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:33:04.361720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7cc16d54--75e9--5c21--b21a--878ce6efb3d6-osd--block--7cc16d54--75e9--5c21--b21a--878ce6efb3d6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yYjIpl-QKq5-SuzB-H6iQ-dv9d-fL3r-4wbiAB', 'scsi-0QEMU_QEMU_HARDDISK_fde2b913-b6a3-4677-89e6-f2f4f6c968dc', 'scsi-SQEMU_QEMU_HARDDISK_fde2b913-b6a3-4677-89e6-f2f4f6c968dc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:33:04.361732 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:33:04.361744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:33:04.361764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:33:04.361776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--53dd44b5--7849--5101--9e2a--fd90ac927c8f-osd--block--53dd44b5--7849--5101--9e2a--fd90ac927c8f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PG2lDJ-9inP-j75c-Ibws-o2zw-l7L3-HOqOBz', 'scsi-0QEMU_QEMU_HARDDISK_d9d2a167-7162-4dbb-9ae3-4acc9c24be60', 'scsi-SQEMU_QEMU_HARDDISK_d9d2a167-7162-4dbb-9ae3-4acc9c24be60'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:33:04.361798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:33:04.361809 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:33:04.361820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_822f3ffb-f416-492b-baa5-9c709b0e03df', 'scsi-SQEMU_QEMU_HARDDISK_822f3ffb-f416-492b-baa5-9c709b0e03df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:33:04.361837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:33:04.361849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:33:04.361860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}) 
 2025-08-29 17:33:04.361871 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:33:04.361891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part1', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part14', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part15', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part16', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:33:04.361917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a4c19265--6381--5c6d--bd77--cfabc91aafa2-osd--block--a4c19265--6381--5c6d--bd77--cfabc91aafa2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TQ67LM-0DeH-KRgC-shpu-YHah-KL2O-nFSj7t', 'scsi-0QEMU_QEMU_HARDDISK_4c7a21bb-a31e-489d-9d8e-9d9677eceddb', 'scsi-SQEMU_QEMU_HARDDISK_4c7a21bb-a31e-489d-9d8e-9d9677eceddb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:33:04.361930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b12c38cd--5c6b--5ee1--93c6--dbb5afb60591-osd--block--b12c38cd--5c6b--5ee1--93c6--dbb5afb60591'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-otIJoe-qyBJ-4yKo-skrf-cWsC-MqoW-XYudVU', 'scsi-0QEMU_QEMU_HARDDISK_a2504ec8-b4e2-4484-b80f-e6a3be658c85', 'scsi-SQEMU_QEMU_HARDDISK_a2504ec8-b4e2-4484-b80f-e6a3be658c85'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:33:04.361941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f77776fc-d93a-4a3c-99b5-8bf2997ddc70', 'scsi-SQEMU_QEMU_HARDDISK_f77776fc-d93a-4a3c-99b5-8bf2997ddc70'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:33:04.361958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:33:04.361978 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:33:04.361990 | orchestrator | 2025-08-29 17:33:04.362001 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-08-29 17:33:04.362013 | orchestrator | Friday 29 August 2025 17:31:07 +0000 (0:00:00.617) 0:00:18.347 ********* 2025-08-29 17:33:04.362060 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b00dade2--f82b--53af--89a3--8c9250354ec6-osd--block--b00dade2--f82b--53af--89a3--8c9250354ec6', 'dm-uuid-LVM-VY8oO4jmapOTYN6w4zG3PV2M2NyDnMLhL8j5KdTyG1xSlCfRvP3XmgHoqydm0inH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362072 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8088253a--7e26--529d--8fdb--0f472c9bb5d3-osd--block--8088253a--7e26--529d--8fdb--0f472c9bb5d3', 'dm-uuid-LVM-FVeBMUPoAV9RcTz0ycqRnP1EtAKr6OFuAEc2nlv4hoplmaxvoQG2BgcNFvR0LK8g'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362090 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362103 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362115 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362141 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362153 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362165 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362182 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362195 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7cc16d54--75e9--5c21--b21a--878ce6efb3d6-osd--block--7cc16d54--75e9--5c21--b21a--878ce6efb3d6', 'dm-uuid-LVM-mGBVEzB4gcKM39Xx0aLE22bZ3zyymiRPX0QySadYR5ZdqE0ySp3sIrwpHhhyJTyJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362207 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362249 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53dd44b5--7849--5101--9e2a--fd90ac927c8f-osd--block--53dd44b5--7849--5101--9e2a--fd90ac927c8f', 'dm-uuid-LVM-tZ5rsxduFMnqzvdTHnqQLWccJnZQYchZIAFT1e5WnYYXln1r877aX72JY4ISW52H'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362269 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part1', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part14', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part15', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part16', 'scsi-SQEMU_QEMU_HARDDISK_010978aa-1cb4-4e02-9662-44bb4ec64585-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362283 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362295 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b00dade2--f82b--53af--89a3--8c9250354ec6-osd--block--b00dade2--f82b--53af--89a3--8c9250354ec6'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Dkm75k-dgyQ-fHCc-DskW-R3kq-Avrn-VOvlhd', 'scsi-0QEMU_QEMU_HARDDISK_82706033-d7aa-4ff6-a1c3-b9e917369a8a', 'scsi-SQEMU_QEMU_HARDDISK_82706033-d7aa-4ff6-a1c3-b9e917369a8a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362337 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8088253a--7e26--529d--8fdb--0f472c9bb5d3-osd--block--8088253a--7e26--529d--8fdb--0f472c9bb5d3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fo9JB4-Jkhr-Vfiv-ZF1s-HkrB-0RZo-O0eTPw', 'scsi-0QEMU_QEMU_HARDDISK_2fca4086-decf-4125-8890-d41999d174b7', 'scsi-SQEMU_QEMU_HARDDISK_2fca4086-decf-4125-8890-d41999d174b7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362349 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362383 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95e03a76-aedc-4db5-a6e6-17eb5f40fbcd', 'scsi-SQEMU_QEMU_HARDDISK_95e03a76-aedc-4db5-a6e6-17eb5f40fbcd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362395 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362407 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362432 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362444 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362455 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362472 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362483 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362495 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:33:04.362515 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part1', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part14', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part15', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part16', 'scsi-SQEMU_QEMU_HARDDISK_a9f3ec05-5588-4818-9e9a-1cb35db5ebc3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-08-29 17:33:04.362535 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7cc16d54--75e9--5c21--b21a--878ce6efb3d6-osd--block--7cc16d54--75e9--5c21--b21a--878ce6efb3d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yYjIpl-QKq5-SuzB-H6iQ-dv9d-fL3r-4wbiAB', 'scsi-0QEMU_QEMU_HARDDISK_fde2b913-b6a3-4677-89e6-f2f4f6c968dc', 'scsi-SQEMU_QEMU_HARDDISK_fde2b913-b6a3-4677-89e6-f2f4f6c968dc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362555 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--53dd44b5--7849--5101--9e2a--fd90ac927c8f-osd--block--53dd44b5--7849--5101--9e2a--fd90ac927c8f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PG2lDJ-9inP-j75c-Ibws-o2zw-l7L3-HOqOBz', 'scsi-0QEMU_QEMU_HARDDISK_d9d2a167-7162-4dbb-9ae3-4acc9c24be60', 'scsi-SQEMU_QEMU_HARDDISK_d9d2a167-7162-4dbb-9ae3-4acc9c24be60'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362567 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_822f3ffb-f416-492b-baa5-9c709b0e03df', 'scsi-SQEMU_QEMU_HARDDISK_822f3ffb-f416-492b-baa5-9c709b0e03df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362591 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362603 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:33:04.362615 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a4c19265--6381--5c6d--bd77--cfabc91aafa2-osd--block--a4c19265--6381--5c6d--bd77--cfabc91aafa2', 'dm-uuid-LVM-QSwdfYrrHmm7V51x7PzoPflTmwgQDmNf0ILcNWcv6jcDDetm7KKU0VlRyvTcFcbc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362626 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b12c38cd--5c6b--5ee1--93c6--dbb5afb60591-osd--block--b12c38cd--5c6b--5ee1--93c6--dbb5afb60591', 'dm-uuid-LVM-i9XIynlVD4XQHui1DTZfZNm2dtjd80d66kxMgcfPpxXv56ULLy5Z5x7FNHv6aseG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362643 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362655 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362672 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362689 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362701 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362713 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362724 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362735 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362760 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part1', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part14', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part15', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part16', 'scsi-SQEMU_QEMU_HARDDISK_201936ad-eacc-472c-a085-9d8dab0e0a92-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-08-29 17:33:04.362809 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a4c19265--6381--5c6d--bd77--cfabc91aafa2-osd--block--a4c19265--6381--5c6d--bd77--cfabc91aafa2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TQ67LM-0DeH-KRgC-shpu-YHah-KL2O-nFSj7t', 'scsi-0QEMU_QEMU_HARDDISK_4c7a21bb-a31e-489d-9d8e-9d9677eceddb', 'scsi-SQEMU_QEMU_HARDDISK_4c7a21bb-a31e-489d-9d8e-9d9677eceddb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362913 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b12c38cd--5c6b--5ee1--93c6--dbb5afb60591-osd--block--b12c38cd--5c6b--5ee1--93c6--dbb5afb60591'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-otIJoe-qyBJ-4yKo-skrf-cWsC-MqoW-XYudVU', 'scsi-0QEMU_QEMU_HARDDISK_a2504ec8-b4e2-4484-b80f-e6a3be658c85', 'scsi-SQEMU_QEMU_HARDDISK_a2504ec8-b4e2-4484-b80f-e6a3be658c85'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362936 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f77776fc-d93a-4a3c-99b5-8bf2997ddc70', 'scsi-SQEMU_QEMU_HARDDISK_f77776fc-d93a-4a3c-99b5-8bf2997ddc70'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362955 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-16-37-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:33:04.362967 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:33:04.362978 | orchestrator | 2025-08-29 17:33:04.362989 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-08-29 17:33:04.363000 | orchestrator | Friday 29 August 2025 17:31:08 +0000 (0:00:00.840) 0:00:19.188 ********* 2025-08-29 17:33:04.363011 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:33:04.363022 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:33:04.363033 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:33:04.363044 | orchestrator | 2025-08-29 17:33:04.363055 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-08-29 17:33:04.363066 | orchestrator | Friday 29 August 2025 17:31:09 +0000 (0:00:00.827) 0:00:20.016 ********* 2025-08-29 17:33:04.363076 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:33:04.363087 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:33:04.363098 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:33:04.363108 | orchestrator | 2025-08-29 17:33:04.363119 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 17:33:04.363130 | orchestrator | Friday 29 August 2025 17:31:09 +0000 (0:00:00.748) 0:00:20.765 ********* 2025-08-29 17:33:04.363141 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:33:04.363152 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:33:04.363163 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:33:04.363174 | orchestrator | 2025-08-29 17:33:04.363185 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 17:33:04.363195 | orchestrator | Friday 29 August 2025 17:31:10 +0000 (0:00:00.756) 0:00:21.521 
********* 2025-08-29 17:33:04.363206 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:33:04.363217 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:33:04.363228 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:33:04.363238 | orchestrator | 2025-08-29 17:33:04.363249 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 17:33:04.363260 | orchestrator | Friday 29 August 2025 17:31:10 +0000 (0:00:00.327) 0:00:21.849 ********* 2025-08-29 17:33:04.363287 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:33:04.363298 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:33:04.363471 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:33:04.363487 | orchestrator | 2025-08-29 17:33:04.363498 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 17:33:04.363509 | orchestrator | Friday 29 August 2025 17:31:11 +0000 (0:00:00.428) 0:00:22.277 ********* 2025-08-29 17:33:04.363520 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:33:04.363531 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:33:04.363541 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:33:04.363552 | orchestrator | 2025-08-29 17:33:04.363562 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-08-29 17:33:04.363580 | orchestrator | Friday 29 August 2025 17:31:11 +0000 (0:00:00.578) 0:00:22.855 ********* 2025-08-29 17:33:04.363591 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-08-29 17:33:04.363602 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-08-29 17:33:04.363613 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-08-29 17:33:04.363623 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-08-29 17:33:04.363634 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-08-29 17:33:04.363645 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-2) 2025-08-29 17:33:04.363656 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-08-29 17:33:04.363666 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-08-29 17:33:04.363688 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-08-29 17:33:04.363700 | orchestrator | 2025-08-29 17:33:04.363711 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-08-29 17:33:04.363722 | orchestrator | Friday 29 August 2025 17:31:12 +0000 (0:00:00.963) 0:00:23.819 ********* 2025-08-29 17:33:04.363733 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 17:33:04.363744 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 17:33:04.363754 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 17:33:04.363775 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:33:04.363786 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-08-29 17:33:04.363796 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-08-29 17:33:04.363824 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-08-29 17:33:04.363834 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:33:04.363844 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-08-29 17:33:04.363853 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-08-29 17:33:04.363863 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-08-29 17:33:04.363872 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:33:04.363881 | orchestrator | 2025-08-29 17:33:04.363891 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-08-29 17:33:04.363901 | orchestrator | Friday 29 August 2025 17:31:13 +0000 (0:00:00.381) 0:00:24.200 ********* 2025-08-29 
17:33:04.363910 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:33:04.363920 | orchestrator | 2025-08-29 17:33:04.363930 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-08-29 17:33:04.363941 | orchestrator | Friday 29 August 2025 17:31:14 +0000 (0:00:00.823) 0:00:25.024 ********* 2025-08-29 17:33:04.363950 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:33:04.363960 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:33:04.363969 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:33:04.363979 | orchestrator | 2025-08-29 17:33:04.363995 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-08-29 17:33:04.364005 | orchestrator | Friday 29 August 2025 17:31:14 +0000 (0:00:00.347) 0:00:25.372 ********* 2025-08-29 17:33:04.364014 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:33:04.364024 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:33:04.364041 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:33:04.364050 | orchestrator | 2025-08-29 17:33:04.364060 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-08-29 17:33:04.364070 | orchestrator | Friday 29 August 2025 17:31:14 +0000 (0:00:00.331) 0:00:25.703 ********* 2025-08-29 17:33:04.364079 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:33:04.364089 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:33:04.364098 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:33:04.364108 | orchestrator | 2025-08-29 17:33:04.364118 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-08-29 17:33:04.364127 | orchestrator | Friday 29 August 2025 17:31:15 +0000 (0:00:00.380) 0:00:26.084 ********* 2025-08-29 
17:33:04.364137 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:33:04.364146 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:33:04.364156 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:33:04.364166 | orchestrator | 2025-08-29 17:33:04.364175 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-08-29 17:33:04.364185 | orchestrator | Friday 29 August 2025 17:31:15 +0000 (0:00:00.675) 0:00:26.760 ********* 2025-08-29 17:33:04.364194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 17:33:04.364204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 17:33:04.364214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 17:33:04.364223 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:33:04.364233 | orchestrator | 2025-08-29 17:33:04.364242 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-08-29 17:33:04.364252 | orchestrator | Friday 29 August 2025 17:31:16 +0000 (0:00:00.412) 0:00:27.172 ********* 2025-08-29 17:33:04.364261 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 17:33:04.364271 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 17:33:04.364280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 17:33:04.364290 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:33:04.364299 | orchestrator | 2025-08-29 17:33:04.364309 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-08-29 17:33:04.364318 | orchestrator | Friday 29 August 2025 17:31:16 +0000 (0:00:00.400) 0:00:27.573 ********* 2025-08-29 17:33:04.364328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 17:33:04.364337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 17:33:04.364347 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 17:33:04.364375 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:33:04.364385 | orchestrator | 2025-08-29 17:33:04.364399 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-08-29 17:33:04.364409 | orchestrator | Friday 29 August 2025 17:31:17 +0000 (0:00:00.390) 0:00:27.963 ********* 2025-08-29 17:33:04.364419 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:33:04.364428 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:33:04.364438 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:33:04.364447 | orchestrator | 2025-08-29 17:33:04.364457 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-08-29 17:33:04.364467 | orchestrator | Friday 29 August 2025 17:31:17 +0000 (0:00:00.329) 0:00:28.292 ********* 2025-08-29 17:33:04.364476 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 17:33:04.364486 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 17:33:04.364495 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-08-29 17:33:04.364505 | orchestrator | 2025-08-29 17:33:04.364514 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-08-29 17:33:04.364524 | orchestrator | Friday 29 August 2025 17:31:17 +0000 (0:00:00.578) 0:00:28.871 ********* 2025-08-29 17:33:04.364534 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 17:33:04.364543 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 17:33:04.364559 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 17:33:04.364569 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 17:33:04.364579 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-08-29 17:33:04.364589 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 17:33:04.364598 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 17:33:04.364607 | orchestrator | 2025-08-29 17:33:04.364617 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-08-29 17:33:04.364627 | orchestrator | Friday 29 August 2025 17:31:18 +0000 (0:00:01.057) 0:00:29.929 ********* 2025-08-29 17:33:04.364636 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 17:33:04.364646 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 17:33:04.364655 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 17:33:04.364665 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 17:33:04.364675 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 17:33:04.364684 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 17:33:04.364694 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 17:33:04.364703 | orchestrator | 2025-08-29 17:33:04.364718 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-08-29 17:33:04.364728 | orchestrator | Friday 29 August 2025 17:31:21 +0000 (0:00:02.157) 0:00:32.086 ********* 2025-08-29 17:33:04.364737 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:33:04.364747 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:33:04.364756 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-08-29 17:33:04.364766 | orchestrator | 2025-08-29 17:33:04.364776 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-08-29 17:33:04.364785 | orchestrator | Friday 29 August 2025 17:31:21 +0000 (0:00:00.412) 0:00:32.499 ********* 2025-08-29 17:33:04.364796 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 17:33:04.364806 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 17:33:04.364817 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 17:33:04.364826 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 17:33:04.364836 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 17:33:04.364846 | orchestrator | 2025-08-29 17:33:04.364862 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-08-29 17:33:04.364876 | orchestrator | Friday 29 August 2025 17:32:08 +0000 (0:00:46.983) 0:01:19.483 ********* 2025-08-29 17:33:04.364886 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.364896 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.364905 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.364915 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.364924 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.364934 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.364944 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-08-29 17:33:04.364953 | orchestrator | 2025-08-29 17:33:04.364963 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-08-29 17:33:04.364973 | orchestrator | Friday 29 August 2025 17:32:33 +0000 (0:00:24.994) 0:01:44.477 ********* 2025-08-29 17:33:04.364982 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.364992 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.365001 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.365011 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.365020 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.365030 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.365039 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 17:33:04.365049 | orchestrator | 2025-08-29 17:33:04.365059 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-08-29 17:33:04.365068 | orchestrator | Friday 29 August 2025 17:32:45 +0000 (0:00:12.327) 0:01:56.804 ********* 2025-08-29 17:33:04.365077 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.365087 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 17:33:04.365097 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 17:33:04.365106 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.365116 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 17:33:04.365126 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 17:33:04.365140 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.365150 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 17:33:04.365160 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 17:33:04.365169 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.365179 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 17:33:04.365188 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 17:33:04.365198 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.365207 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-08-29 17:33:04.365217 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 17:33:04.365226 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:33:04.365236 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 17:33:04.365255 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 17:33:04.365265 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-08-29 17:33:04.365274 | orchestrator | 2025-08-29 17:33:04.365284 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:33:04.365294 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-08-29 17:33:04.365305 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 17:33:04.365315 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-08-29 17:33:04.365325 | orchestrator | 2025-08-29 17:33:04.365335 | orchestrator | 2025-08-29 17:33:04.365344 | orchestrator | 2025-08-29 17:33:04.365369 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:33:04.365379 | orchestrator | Friday 29 August 2025 17:33:03 +0000 (0:00:17.187) 0:02:13.992 ********* 2025-08-29 17:33:04.365389 | orchestrator | =============================================================================== 2025-08-29 17:33:04.365398 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.98s 2025-08-29 17:33:04.365408 | orchestrator | generate keys ---------------------------------------------------------- 24.99s 2025-08-29 17:33:04.365422 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.19s 
2025-08-29 17:33:04.365432 | orchestrator | get keys from monitors ------------------------------------------------- 12.33s 2025-08-29 17:33:04.365442 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.22s 2025-08-29 17:33:04.365451 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.16s 2025-08-29 17:33:04.365461 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.75s 2025-08-29 17:33:04.365470 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.06s 2025-08-29 17:33:04.365480 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.96s 2025-08-29 17:33:04.365489 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.88s 2025-08-29 17:33:04.365499 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.84s 2025-08-29 17:33:04.365509 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.83s 2025-08-29 17:33:04.365518 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.82s 2025-08-29 17:33:04.365528 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.81s 2025-08-29 17:33:04.365537 | orchestrator | ceph-facts : Set_fact build devices from resolved symlinks -------------- 0.79s 2025-08-29 17:33:04.365547 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.76s 2025-08-29 17:33:04.365556 | orchestrator | ceph-facts : Set default osd_pool_default_crush_rule fact --------------- 0.75s 2025-08-29 17:33:04.365566 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.72s 2025-08-29 17:33:04.365576 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.68s 2025-08-29 
17:33:04.365585 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.68s 2025-08-29 17:33:04.365595 | orchestrator | 2025-08-29 17:33:04 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:04.365605 | orchestrator | 2025-08-29 17:33:04 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:04.365614 | orchestrator | 2025-08-29 17:33:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:07.401444 | orchestrator | 2025-08-29 17:33:07 | INFO  | Task ac7e628c-d6ce-4954-be30-62215bfbfc95 is in state STARTED 2025-08-29 17:33:07.401591 | orchestrator | 2025-08-29 17:33:07 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:07.402951 | orchestrator | 2025-08-29 17:33:07 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:07.402986 | orchestrator | 2025-08-29 17:33:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:10.438157 | orchestrator | 2025-08-29 17:33:10 | INFO  | Task ac7e628c-d6ce-4954-be30-62215bfbfc95 is in state STARTED 2025-08-29 17:33:10.438349 | orchestrator | 2025-08-29 17:33:10 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:10.439176 | orchestrator | 2025-08-29 17:33:10 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:10.441673 | orchestrator | 2025-08-29 17:33:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:13.478474 | orchestrator | 2025-08-29 17:33:13 | INFO  | Task ac7e628c-d6ce-4954-be30-62215bfbfc95 is in state STARTED 2025-08-29 17:33:13.479607 | orchestrator | 2025-08-29 17:33:13 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:13.481491 | orchestrator | 2025-08-29 17:33:13 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:13.481513 | orchestrator | 2025-08-29 
17:33:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:16.512000 | orchestrator | 2025-08-29 17:33:16 | INFO  | Task ac7e628c-d6ce-4954-be30-62215bfbfc95 is in state STARTED 2025-08-29 17:33:16.517061 | orchestrator | 2025-08-29 17:33:16 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:16.519020 | orchestrator | 2025-08-29 17:33:16 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:16.519054 | orchestrator | 2025-08-29 17:33:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:19.563088 | orchestrator | 2025-08-29 17:33:19 | INFO  | Task ac7e628c-d6ce-4954-be30-62215bfbfc95 is in state STARTED 2025-08-29 17:33:19.565636 | orchestrator | 2025-08-29 17:33:19 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:19.565741 | orchestrator | 2025-08-29 17:33:19 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:19.565758 | orchestrator | 2025-08-29 17:33:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:22.605897 | orchestrator | 2025-08-29 17:33:22 | INFO  | Task ac7e628c-d6ce-4954-be30-62215bfbfc95 is in state STARTED 2025-08-29 17:33:22.606077 | orchestrator | 2025-08-29 17:33:22 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:22.609100 | orchestrator | 2025-08-29 17:33:22 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:22.609124 | orchestrator | 2025-08-29 17:33:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:25.651261 | orchestrator | 2025-08-29 17:33:25 | INFO  | Task ac7e628c-d6ce-4954-be30-62215bfbfc95 is in state STARTED 2025-08-29 17:33:25.652730 | orchestrator | 2025-08-29 17:33:25 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:25.657946 | orchestrator | 2025-08-29 17:33:25 | INFO  | Task 
1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:25.657971 | orchestrator | 2025-08-29 17:33:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:28.713049 | orchestrator | 2025-08-29 17:33:28 | INFO  | Task ac7e628c-d6ce-4954-be30-62215bfbfc95 is in state STARTED 2025-08-29 17:33:28.715762 | orchestrator | 2025-08-29 17:33:28 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:28.719169 | orchestrator | 2025-08-29 17:33:28 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:28.719440 | orchestrator | 2025-08-29 17:33:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:31.753903 | orchestrator | 2025-08-29 17:33:31 | INFO  | Task ac7e628c-d6ce-4954-be30-62215bfbfc95 is in state STARTED 2025-08-29 17:33:31.754271 | orchestrator | 2025-08-29 17:33:31 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:31.754990 | orchestrator | 2025-08-29 17:33:31 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:31.755098 | orchestrator | 2025-08-29 17:33:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:34.798535 | orchestrator | 2025-08-29 17:33:34 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state STARTED 2025-08-29 17:33:34.800928 | orchestrator | 2025-08-29 17:33:34 | INFO  | Task ac7e628c-d6ce-4954-be30-62215bfbfc95 is in state SUCCESS 2025-08-29 17:33:34.802823 | orchestrator | 2025-08-29 17:33:34 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:34.803799 | orchestrator | 2025-08-29 17:33:34 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:34.804170 | orchestrator | 2025-08-29 17:33:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:37.854567 | orchestrator | 2025-08-29 17:33:37 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state 
STARTED 2025-08-29 17:33:37.857116 | orchestrator | 2025-08-29 17:33:37 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:37.860207 | orchestrator | 2025-08-29 17:33:37 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:37.860237 | orchestrator | 2025-08-29 17:33:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:40.908910 | orchestrator | 2025-08-29 17:33:40 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state STARTED 2025-08-29 17:33:40.911666 | orchestrator | 2025-08-29 17:33:40 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:40.913488 | orchestrator | 2025-08-29 17:33:40 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:40.913511 | orchestrator | 2025-08-29 17:33:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:43.964180 | orchestrator | 2025-08-29 17:33:43 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state STARTED 2025-08-29 17:33:43.964884 | orchestrator | 2025-08-29 17:33:43 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:43.965979 | orchestrator | 2025-08-29 17:33:43 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:43.966148 | orchestrator | 2025-08-29 17:33:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:47.008733 | orchestrator | 2025-08-29 17:33:47 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state STARTED 2025-08-29 17:33:47.009972 | orchestrator | 2025-08-29 17:33:47 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:47.011591 | orchestrator | 2025-08-29 17:33:47 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:47.011641 | orchestrator | 2025-08-29 17:33:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:50.046642 | orchestrator | 
2025-08-29 17:33:50 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state STARTED 2025-08-29 17:33:50.048425 | orchestrator | 2025-08-29 17:33:50 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:50.049221 | orchestrator | 2025-08-29 17:33:50 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:50.049279 | orchestrator | 2025-08-29 17:33:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:53.086528 | orchestrator | 2025-08-29 17:33:53 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state STARTED 2025-08-29 17:33:53.088447 | orchestrator | 2025-08-29 17:33:53 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:53.090528 | orchestrator | 2025-08-29 17:33:53 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:53.091073 | orchestrator | 2025-08-29 17:33:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:56.143236 | orchestrator | 2025-08-29 17:33:56 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state STARTED 2025-08-29 17:33:56.157412 | orchestrator | 2025-08-29 17:33:56 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:56.165787 | orchestrator | 2025-08-29 17:33:56 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:56.165860 | orchestrator | 2025-08-29 17:33:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:33:59.207629 | orchestrator | 2025-08-29 17:33:59 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state STARTED 2025-08-29 17:33:59.208756 | orchestrator | 2025-08-29 17:33:59 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:33:59.210435 | orchestrator | 2025-08-29 17:33:59 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:33:59.210481 | orchestrator | 2025-08-29 17:33:59 | INFO  | 
Wait 1 second(s) until the next check 2025-08-29 17:34:02.244779 | orchestrator | 2025-08-29 17:34:02 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state STARTED 2025-08-29 17:34:02.248747 | orchestrator | 2025-08-29 17:34:02 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:34:02.251117 | orchestrator | 2025-08-29 17:34:02 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:02.251182 | orchestrator | 2025-08-29 17:34:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:34:05.295880 | orchestrator | 2025-08-29 17:34:05 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state STARTED 2025-08-29 17:34:05.297923 | orchestrator | 2025-08-29 17:34:05 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:34:05.300486 | orchestrator | 2025-08-29 17:34:05 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:05.300554 | orchestrator | 2025-08-29 17:34:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:34:08.342919 | orchestrator | 2025-08-29 17:34:08 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state STARTED 2025-08-29 17:34:08.343637 | orchestrator | 2025-08-29 17:34:08 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:34:08.344666 | orchestrator | 2025-08-29 17:34:08 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:08.344689 | orchestrator | 2025-08-29 17:34:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:34:11.392770 | orchestrator | 2025-08-29 17:34:11 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state STARTED 2025-08-29 17:34:11.395315 | orchestrator | 2025-08-29 17:34:11 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:34:11.397807 | orchestrator | 2025-08-29 17:34:11 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state 
STARTED 2025-08-29 17:34:11.397860 | orchestrator | 2025-08-29 17:34:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:34:14.438190 | orchestrator | 2025-08-29 17:34:14 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state STARTED 2025-08-29 17:34:14.439496 | orchestrator | 2025-08-29 17:34:14 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:34:14.441043 | orchestrator | 2025-08-29 17:34:14 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:14.441347 | orchestrator | 2025-08-29 17:34:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:34:17.486814 | orchestrator | 2025-08-29 17:34:17 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state STARTED 2025-08-29 17:34:17.489218 | orchestrator | 2025-08-29 17:34:17 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:34:17.491223 | orchestrator | 2025-08-29 17:34:17 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:17.491859 | orchestrator | 2025-08-29 17:34:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:34:20.540354 | orchestrator | 2025-08-29 17:34:20 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state STARTED 2025-08-29 17:34:20.541685 | orchestrator | 2025-08-29 17:34:20 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:34:20.543057 | orchestrator | 2025-08-29 17:34:20 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:20.543082 | orchestrator | 2025-08-29 17:34:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:34:23.591018 | orchestrator | 2025-08-29 17:34:23 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state STARTED 2025-08-29 17:34:23.591912 | orchestrator | 2025-08-29 17:34:23 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:34:23.593074 | orchestrator | 
2025-08-29 17:34:23 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:23.593099 | orchestrator | 2025-08-29 17:34:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:34:26.637285 | orchestrator | 2025-08-29 17:34:26 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state STARTED 2025-08-29 17:34:26.637671 | orchestrator | 2025-08-29 17:34:26 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:34:26.639554 | orchestrator | 2025-08-29 17:34:26 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:26.639576 | orchestrator | 2025-08-29 17:34:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:34:29.694392 | orchestrator | 2025-08-29 17:34:29.694468 | orchestrator | 2025-08-29 17:34:29.694475 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-08-29 17:34:29.694480 | orchestrator | 2025-08-29 17:34:29.694484 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-08-29 17:34:29.694489 | orchestrator | Friday 29 August 2025 17:33:07 +0000 (0:00:00.191) 0:00:00.191 ********* 2025-08-29 17:34:29.694493 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-08-29 17:34:29.694499 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-08-29 17:34:29.694503 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-08-29 17:34:29.694507 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 17:34:29.694527 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-08-29 17:34:29.694532 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-08-29 17:34:29.694536 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-08-29 17:34:29.694539 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-08-29 17:34:29.694543 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-08-29 17:34:29.694547 | orchestrator | 2025-08-29 17:34:29.694551 | orchestrator | TASK [Create share directory] ************************************************** 2025-08-29 17:34:29.694555 | orchestrator | Friday 29 August 2025 17:33:12 +0000 (0:00:04.222) 0:00:04.413 ********* 2025-08-29 17:34:29.694559 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 17:34:29.694563 | orchestrator | 2025-08-29 17:34:29.694567 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-08-29 17:34:29.694571 | orchestrator | Friday 29 August 2025 17:33:13 +0000 (0:00:01.063) 0:00:05.476 ********* 2025-08-29 17:34:29.694575 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-08-29 17:34:29.694579 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-08-29 17:34:29.694583 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-08-29 17:34:29.694587 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 17:34:29.694591 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-08-29 17:34:29.694595 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-08-29 17:34:29.694598 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-08-29 17:34:29.694611 
| orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-08-29 17:34:29.694615 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-08-29 17:34:29.694619 | orchestrator | 2025-08-29 17:34:29.694623 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-08-29 17:34:29.694627 | orchestrator | Friday 29 August 2025 17:33:25 +0000 (0:00:12.598) 0:00:18.074 ********* 2025-08-29 17:34:29.694631 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-08-29 17:34:29.694635 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-08-29 17:34:29.694639 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-08-29 17:34:29.694642 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 17:34:29.694646 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-08-29 17:34:29.694650 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-08-29 17:34:29.694653 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-08-29 17:34:29.694657 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-08-29 17:34:29.694661 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-08-29 17:34:29.694664 | orchestrator | 2025-08-29 17:34:29.694668 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:34:29.694672 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:34:29.694677 | orchestrator | 2025-08-29 17:34:29.694681 | orchestrator | 2025-08-29 17:34:29.694685 | orchestrator | TASKS RECAP ******************************************************************** 
2025-08-29 17:34:29.694689 | orchestrator | Friday 29 August 2025 17:33:32 +0000 (0:00:06.608) 0:00:24.683 ********* 2025-08-29 17:34:29.694697 | orchestrator | =============================================================================== 2025-08-29 17:34:29.694701 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.60s 2025-08-29 17:34:29.694705 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.61s 2025-08-29 17:34:29.694708 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.22s 2025-08-29 17:34:29.694712 | orchestrator | Create share directory -------------------------------------------------- 1.06s 2025-08-29 17:34:29.694716 | orchestrator | 2025-08-29 17:34:29.694719 | orchestrator | 2025-08-29 17:34:29.694723 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-08-29 17:34:29.694727 | orchestrator | 2025-08-29 17:34:29.694742 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-08-29 17:34:29.694746 | orchestrator | Friday 29 August 2025 17:33:36 +0000 (0:00:00.248) 0:00:00.248 ********* 2025-08-29 17:34:29.694751 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-08-29 17:34:29.694758 | orchestrator | 2025-08-29 17:34:29.694764 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-08-29 17:34:29.694771 | orchestrator | Friday 29 August 2025 17:33:36 +0000 (0:00:00.227) 0:00:00.476 ********* 2025-08-29 17:34:29.694776 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-08-29 17:34:29.694782 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-08-29 17:34:29.694788 | orchestrator | ok: [testbed-manager] => 
(item=/opt/cephclient) 2025-08-29 17:34:29.694794 | orchestrator | 2025-08-29 17:34:29.694800 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-08-29 17:34:29.694806 | orchestrator | Friday 29 August 2025 17:33:38 +0000 (0:00:01.305) 0:00:01.781 ********* 2025-08-29 17:34:29.694810 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-08-29 17:34:29.694814 | orchestrator | 2025-08-29 17:34:29.694817 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-08-29 17:34:29.694821 | orchestrator | Friday 29 August 2025 17:33:39 +0000 (0:00:01.299) 0:00:03.081 ********* 2025-08-29 17:34:29.694825 | orchestrator | changed: [testbed-manager] 2025-08-29 17:34:29.694829 | orchestrator | 2025-08-29 17:34:29.694833 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-08-29 17:34:29.694837 | orchestrator | Friday 29 August 2025 17:33:40 +0000 (0:00:01.081) 0:00:04.163 ********* 2025-08-29 17:34:29.694840 | orchestrator | changed: [testbed-manager] 2025-08-29 17:34:29.694844 | orchestrator | 2025-08-29 17:34:29.694848 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-08-29 17:34:29.694851 | orchestrator | Friday 29 August 2025 17:33:41 +0000 (0:00:00.982) 0:00:05.145 ********* 2025-08-29 17:34:29.694855 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2025-08-29 17:34:29.694859 | orchestrator | ok: [testbed-manager] 2025-08-29 17:34:29.694863 | orchestrator | 2025-08-29 17:34:29.694867 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-08-29 17:34:29.694870 | orchestrator | Friday 29 August 2025 17:34:17 +0000 (0:00:36.333) 0:00:41.479 ********* 2025-08-29 17:34:29.694874 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-08-29 17:34:29.694878 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-08-29 17:34:29.694882 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-08-29 17:34:29.694886 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-08-29 17:34:29.694890 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-08-29 17:34:29.694893 | orchestrator | 2025-08-29 17:34:29.694897 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-08-29 17:34:29.694901 | orchestrator | Friday 29 August 2025 17:34:22 +0000 (0:00:04.292) 0:00:45.771 ********* 2025-08-29 17:34:29.694912 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-08-29 17:34:29.694916 | orchestrator | 2025-08-29 17:34:29.694919 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-08-29 17:34:29.694923 | orchestrator | Friday 29 August 2025 17:34:22 +0000 (0:00:00.510) 0:00:46.282 ********* 2025-08-29 17:34:29.694927 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:34:29.694930 | orchestrator | 2025-08-29 17:34:29.694934 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-08-29 17:34:29.694938 | orchestrator | Friday 29 August 2025 17:34:22 +0000 (0:00:00.140) 0:00:46.422 ********* 2025-08-29 17:34:29.694942 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:34:29.694945 | orchestrator | 2025-08-29 17:34:29.694949 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2025-08-29 17:34:29.694953 | orchestrator | Friday 29 August 2025 17:34:23 +0000 (0:00:00.321) 0:00:46.744 ********* 2025-08-29 17:34:29.694957 | orchestrator | changed: [testbed-manager] 2025-08-29 17:34:29.694960 | orchestrator | 2025-08-29 17:34:29.694964 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-08-29 17:34:29.694968 | orchestrator | Friday 29 August 2025 17:34:25 +0000 (0:00:01.981) 0:00:48.726 ********* 2025-08-29 17:34:29.694972 | orchestrator | changed: [testbed-manager] 2025-08-29 17:34:29.694975 | orchestrator | 2025-08-29 17:34:29.694979 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-08-29 17:34:29.694983 | orchestrator | Friday 29 August 2025 17:34:26 +0000 (0:00:00.903) 0:00:49.629 ********* 2025-08-29 17:34:29.694987 | orchestrator | changed: [testbed-manager] 2025-08-29 17:34:29.694990 | orchestrator | 2025-08-29 17:34:29.694994 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-08-29 17:34:29.694998 | orchestrator | Friday 29 August 2025 17:34:26 +0000 (0:00:00.684) 0:00:50.314 ********* 2025-08-29 17:34:29.695002 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-08-29 17:34:29.695007 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-08-29 17:34:29.695013 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-08-29 17:34:29.695019 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-08-29 17:34:29.695025 | orchestrator | 2025-08-29 17:34:29.695031 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:34:29.695037 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 17:34:29.695044 | orchestrator | 2025-08-29 17:34:29.695049 | orchestrator | 2025-08-29 
17:34:29.695055 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:34:29.695061 | orchestrator | Friday 29 August 2025 17:34:28 +0000 (0:00:01.675) 0:00:51.989 ********* 2025-08-29 17:34:29.695071 | orchestrator | =============================================================================== 2025-08-29 17:34:29.695077 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.33s 2025-08-29 17:34:29.695083 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.29s 2025-08-29 17:34:29.695090 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.98s 2025-08-29 17:34:29.695095 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.68s 2025-08-29 17:34:29.695101 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.31s 2025-08-29 17:34:29.695107 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.30s 2025-08-29 17:34:29.695112 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.08s 2025-08-29 17:34:29.695117 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.98s 2025-08-29 17:34:29.695123 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.90s 2025-08-29 17:34:29.695129 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.68s 2025-08-29 17:34:29.695141 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.51s 2025-08-29 17:34:29.695148 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.32s 2025-08-29 17:34:29.695154 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s 2025-08-29 17:34:29.695160 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2025-08-29 17:34:29.695167 | orchestrator | 2025-08-29 17:34:29 | INFO  | Task e4b3c03e-3b32-42a2-bd1b-aa1fee6bd404 is in state SUCCESS 2025-08-29 17:34:29.695901 | orchestrator | 2025-08-29 17:34:29 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state STARTED 2025-08-29 17:34:29.697934 | orchestrator | 2025-08-29 17:34:29 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:29.697953 | orchestrator | 2025-08-29 17:34:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:34:32.738576 | orchestrator | 2025-08-29 17:34:32 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:34:32.740902 | orchestrator | 2025-08-29 17:34:32 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:34:32.742156 | orchestrator | 2025-08-29 17:34:32 | INFO  | Task d4cf479a-c74b-4edc-80d7-625bf007e37f is in state STARTED 2025-08-29 17:34:32.744495 | orchestrator | 2025-08-29 17:34:32 | INFO  | Task 4bd2720f-faa2-4eca-871e-f13af50b93f6 is in state SUCCESS 2025-08-29 17:34:32.747215 | orchestrator | 2025-08-29 17:34:32.747280 | orchestrator | 2025-08-29 17:34:32.747296 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:34:32.747310 | orchestrator | 2025-08-29 17:34:32.747480 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 17:34:32.747740 | orchestrator | Friday 29 August 2025 17:32:41 +0000 (0:00:00.291) 0:00:00.291 ********* 2025-08-29 17:34:32.747752 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:34:32.747761 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:34:32.747768 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:34:32.747775 | orchestrator | 2025-08-29 17:34:32.747783 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2025-08-29 17:34:32.747791 | orchestrator | Friday 29 August 2025 17:32:41 +0000 (0:00:00.297) 0:00:00.589 ********* 2025-08-29 17:34:32.747798 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-08-29 17:34:32.747806 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-08-29 17:34:32.747813 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-08-29 17:34:32.747820 | orchestrator | 2025-08-29 17:34:32.747828 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-08-29 17:34:32.747835 | orchestrator | 2025-08-29 17:34:32.747843 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 17:34:32.747850 | orchestrator | Friday 29 August 2025 17:32:41 +0000 (0:00:00.544) 0:00:01.134 ********* 2025-08-29 17:34:32.747858 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:34:32.747867 | orchestrator | 2025-08-29 17:34:32.747875 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-08-29 17:34:32.747882 | orchestrator | Friday 29 August 2025 17:32:42 +0000 (0:00:00.550) 0:00:01.685 ********* 2025-08-29 17:34:32.747894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 17:34:32.747949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 17:34:32.747960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 17:34:32.747974 | orchestrator | 2025-08-29 17:34:32.747982 | 
orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-08-29 17:34:32.747989 | orchestrator | Friday 29 August 2025 17:32:43 +0000 (0:00:01.218) 0:00:02.903 ********* 2025-08-29 17:34:32.747996 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:34:32.748004 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:34:32.748028 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:34:32.748035 | orchestrator | 2025-08-29 17:34:32.748042 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 17:34:32.748050 | orchestrator | Friday 29 August 2025 17:32:44 +0000 (0:00:00.506) 0:00:03.410 ********* 2025-08-29 17:34:32.748057 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 17:34:32.748071 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 17:34:32.748083 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 17:34:32.748091 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 17:34:32.748098 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 17:34:32.748105 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 17:34:32.748113 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-08-29 17:34:32.748120 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 17:34:32.748127 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 17:34:32.748135 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 17:34:32.748142 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': 
False})  2025-08-29 17:34:32.748149 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 17:34:32.748156 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 17:34:32.748163 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 17:34:32.748176 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-08-29 17:34:32.748183 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 17:34:32.748190 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 17:34:32.748198 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 17:34:32.748205 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 17:34:32.748212 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 17:34:32.748219 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 17:34:32.748226 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 17:34:32.748233 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-08-29 17:34:32.748240 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 17:34:32.748250 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-08-29 17:34:32.748263 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-08-29 17:34:32.748276 | 
orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-08-29 17:34:32.748287 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-08-29 17:34:32.748299 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-08-29 17:34:32.748311 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-08-29 17:34:32.748324 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-08-29 17:34:32.748337 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-08-29 17:34:32.748348 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-08-29 17:34:32.748363 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-08-29 17:34:32.748398 | orchestrator | 2025-08-29 17:34:32.748407 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 17:34:32.748415 | orchestrator | Friday 29 August 2025 17:32:44 +0000 (0:00:00.769) 0:00:04.180 ********* 2025-08-29 17:34:32.748423 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:34:32.748432 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:34:32.748440 | 
orchestrator | ok: [testbed-node-2] 2025-08-29 17:34:32.748447 | orchestrator | 2025-08-29 17:34:32.748455 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 17:34:32.748464 | orchestrator | Friday 29 August 2025 17:32:45 +0000 (0:00:00.358) 0:00:04.538 ********* 2025-08-29 17:34:32.748472 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:34:32.748480 | orchestrator | 2025-08-29 17:34:32.748494 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 17:34:32.748507 | orchestrator | Friday 29 August 2025 17:32:45 +0000 (0:00:00.136) 0:00:04.675 ********* 2025-08-29 17:34:32.748522 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:34:32.748530 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:34:32.748538 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:34:32.748546 | orchestrator | 2025-08-29 17:34:32.748555 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 17:34:32.748562 | orchestrator | Friday 29 August 2025 17:32:45 +0000 (0:00:00.530) 0:00:05.205 ********* 2025-08-29 17:34:32.748571 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:34:32.748579 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:34:32.748587 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:34:32.748595 | orchestrator | 2025-08-29 17:34:32.748603 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 17:34:32.748611 | orchestrator | Friday 29 August 2025 17:32:46 +0000 (0:00:00.344) 0:00:05.550 ********* 2025-08-29 17:34:32.748619 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:34:32.748627 | orchestrator | 2025-08-29 17:34:32.748635 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 17:34:32.748643 | orchestrator | Friday 29 August 2025 17:32:46 +0000 (0:00:00.120) 
0:00:05.671 *********
2025-08-29 17:34:32.748651 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.748659 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:34:32.748667 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:34:32.748676 | orchestrator |
2025-08-29 17:34:32.748684 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 17:34:32.748692 | orchestrator | Friday 29 August 2025 17:32:46 +0000 (0:00:00.306) 0:00:05.978 *********
2025-08-29 17:34:32.748700 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:34:32.748708 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:34:32.748716 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:34:32.748723 | orchestrator |
2025-08-29 17:34:32.748730 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 17:34:32.748738 | orchestrator | Friday 29 August 2025 17:32:47 +0000 (0:00:00.337) 0:00:06.316 *********
2025-08-29 17:34:32.748745 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.748752 | orchestrator |
2025-08-29 17:34:32.748759 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 17:34:32.748766 | orchestrator | Friday 29 August 2025 17:32:47 +0000 (0:00:00.135) 0:00:06.452 *********
2025-08-29 17:34:32.748774 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.748781 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:34:32.748788 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:34:32.748795 | orchestrator |
2025-08-29 17:34:32.748802 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 17:34:32.748810 | orchestrator | Friday 29 August 2025 17:32:47 +0000 (0:00:00.548) 0:00:07.000 *********
2025-08-29 17:34:32.748817 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:34:32.748824 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:34:32.748831 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:34:32.748838 | orchestrator |
2025-08-29 17:34:32.748845 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 17:34:32.748853 | orchestrator | Friday 29 August 2025 17:32:48 +0000 (0:00:00.334) 0:00:07.335 *********
2025-08-29 17:34:32.748860 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.748867 | orchestrator |
2025-08-29 17:34:32.748874 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 17:34:32.748881 | orchestrator | Friday 29 August 2025 17:32:48 +0000 (0:00:00.138) 0:00:07.473 *********
2025-08-29 17:34:32.748888 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.748895 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:34:32.748903 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:34:32.748910 | orchestrator |
2025-08-29 17:34:32.748917 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 17:34:32.748925 | orchestrator | Friday 29 August 2025 17:32:48 +0000 (0:00:00.314) 0:00:07.788 *********
2025-08-29 17:34:32.748937 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:34:32.748944 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:34:32.748951 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:34:32.748958 | orchestrator |
2025-08-29 17:34:32.748965 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 17:34:32.748973 | orchestrator | Friday 29 August 2025 17:32:48 +0000 (0:00:00.332) 0:00:08.120 *********
2025-08-29 17:34:32.748980 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.748987 | orchestrator |
2025-08-29 17:34:32.748994 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 17:34:32.749001 | orchestrator | Friday 29 August 2025 17:32:49 +0000 (0:00:00.392) 0:00:08.513 *********
2025-08-29 17:34:32.749009 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.749016 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:34:32.749023 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:34:32.749031 | orchestrator |
2025-08-29 17:34:32.749038 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 17:34:32.749045 | orchestrator | Friday 29 August 2025 17:32:49 +0000 (0:00:00.337) 0:00:08.851 *********
2025-08-29 17:34:32.749053 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:34:32.749078 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:34:32.749102 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:34:32.749114 | orchestrator |
2025-08-29 17:34:32.749126 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 17:34:32.749139 | orchestrator | Friday 29 August 2025 17:32:49 +0000 (0:00:00.357) 0:00:09.209 *********
2025-08-29 17:34:32.749152 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.749159 | orchestrator |
2025-08-29 17:34:32.749167 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 17:34:32.749174 | orchestrator | Friday 29 August 2025 17:32:50 +0000 (0:00:00.134) 0:00:09.343 *********
2025-08-29 17:34:32.749181 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.749188 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:34:32.749195 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:34:32.749202 | orchestrator |
2025-08-29 17:34:32.749210 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 17:34:32.749222 | orchestrator | Friday 29 August 2025 17:32:50 +0000 (0:00:00.344) 0:00:09.687 *********
2025-08-29 17:34:32.749230 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:34:32.749237 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:34:32.749244 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:34:32.749252 | orchestrator |
2025-08-29 17:34:32.749263 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 17:34:32.749271 | orchestrator | Friday 29 August 2025 17:32:50 +0000 (0:00:00.529) 0:00:10.216 *********
2025-08-29 17:34:32.749278 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.749285 | orchestrator |
2025-08-29 17:34:32.749292 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 17:34:32.749300 | orchestrator | Friday 29 August 2025 17:32:51 +0000 (0:00:00.130) 0:00:10.347 *********
2025-08-29 17:34:32.749307 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.749314 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:34:32.749321 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:34:32.749328 | orchestrator |
2025-08-29 17:34:32.749335 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 17:34:32.749343 | orchestrator | Friday 29 August 2025 17:32:51 +0000 (0:00:00.317) 0:00:10.665 *********
2025-08-29 17:34:32.749350 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:34:32.749357 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:34:32.749386 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:34:32.749394 | orchestrator |
2025-08-29 17:34:32.749402 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 17:34:32.749409 | orchestrator | Friday 29 August 2025 17:32:51 +0000 (0:00:00.368) 0:00:11.033 *********
2025-08-29 17:34:32.749416 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.749430 | orchestrator |
2025-08-29 17:34:32.749437 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 17:34:32.749444 | orchestrator | Friday 29 August 2025 17:32:51 +0000 (0:00:00.139) 0:00:11.173 *********
2025-08-29 17:34:32.749451 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.749459 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:34:32.749466 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:34:32.749473 | orchestrator |
2025-08-29 17:34:32.749480 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 17:34:32.749487 | orchestrator | Friday 29 August 2025 17:32:52 +0000 (0:00:00.327) 0:00:11.500 *********
2025-08-29 17:34:32.749495 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:34:32.749502 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:34:32.749509 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:34:32.749516 | orchestrator |
2025-08-29 17:34:32.749523 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 17:34:32.749530 | orchestrator | Friday 29 August 2025 17:32:52 +0000 (0:00:00.595) 0:00:12.095 *********
2025-08-29 17:34:32.749537 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.749545 | orchestrator |
2025-08-29 17:34:32.749552 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 17:34:32.749559 | orchestrator | Friday 29 August 2025 17:32:52 +0000 (0:00:00.129) 0:00:12.225 *********
2025-08-29 17:34:32.749567 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.749574 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:34:32.749581 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:34:32.749588 | orchestrator |
2025-08-29 17:34:32.749595 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 17:34:32.749602 | orchestrator | Friday 29 August 2025 17:32:53 +0000 (0:00:00.299) 0:00:12.525 *********
2025-08-29 17:34:32.749610 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:34:32.749617 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:34:32.749624 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:34:32.749631 | orchestrator |
2025-08-29 17:34:32.749638 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 17:34:32.749646 | orchestrator | Friday 29 August 2025 17:32:53 +0000 (0:00:00.313) 0:00:12.838 *********
2025-08-29 17:34:32.749653 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.749660 | orchestrator |
2025-08-29 17:34:32.749667 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 17:34:32.749674 | orchestrator | Friday 29 August 2025 17:32:53 +0000 (0:00:00.144) 0:00:12.982 *********
2025-08-29 17:34:32.749682 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.749689 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:34:32.749696 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:34:32.749703 | orchestrator |
2025-08-29 17:34:32.749710 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-08-29 17:34:32.749718 | orchestrator | Friday 29 August 2025 17:32:54 +0000 (0:00:00.585) 0:00:13.568 *********
2025-08-29 17:34:32.749725 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:34:32.749732 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:34:32.749739 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:34:32.749746 | orchestrator |
2025-08-29 17:34:32.749753 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-08-29 17:34:32.749761 | orchestrator | Friday 29 August 2025 17:32:56 +0000 (0:00:01.731) 0:00:15.299 *********
2025-08-29 17:34:32.749768 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-08-29 17:34:32.749775 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-08-29 17:34:32.749782 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-08-29 17:34:32.749790 | orchestrator |
2025-08-29 17:34:32.749797 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-08-29 17:34:32.749809 | orchestrator | Friday 29 August 2025 17:32:58 +0000 (0:00:01.953) 0:00:17.253 *********
2025-08-29 17:34:32.749817 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-08-29 17:34:32.749824 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-08-29 17:34:32.749831 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-08-29 17:34:32.749838 | orchestrator |
2025-08-29 17:34:32.749846 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-08-29 17:34:32.749857 | orchestrator | Friday 29 August 2025 17:33:00 +0000 (0:00:02.331) 0:00:19.584 *********
2025-08-29 17:34:32.749869 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-08-29 17:34:32.749876 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-08-29 17:34:32.749884 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-08-29 17:34:32.749891 | orchestrator |
2025-08-29 17:34:32.749898 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-08-29 17:34:32.749905 | orchestrator | Friday 29 August 2025 17:33:02 +0000 (0:00:02.130) 0:00:21.715 *********
2025-08-29 17:34:32.749912 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.749919 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:34:32.749926
| orchestrator | skipping: [testbed-node-2]
2025-08-29 17:34:32.749933 | orchestrator |
2025-08-29 17:34:32.749940 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-08-29 17:34:32.749951 | orchestrator | Friday 29 August 2025 17:33:02 +0000 (0:00:00.324) 0:00:22.039 *********
2025-08-29 17:34:32.749964 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:34:32.749977 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:34:32.749989 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:34:32.749997 | orchestrator |
2025-08-29 17:34:32.750004 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-08-29 17:34:32.750011 | orchestrator | Friday 29 August 2025 17:33:03 +0000 (0:00:00.308) 0:00:22.348 *********
2025-08-29 17:34:32.750062 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:34:32.750070 | orchestrator |
2025-08-29 17:34:32.750077 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-08-29 17:34:32.750085 | orchestrator | Friday 29 August 2025 17:33:03 +0000 (0:00:00.604) 0:00:22.953 *********
2025-08-29 17:34:32.750093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 17:34:32.750127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 17:34:32.750137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 17:34:32.750150 | orchestrator | 2025-08-29 17:34:32.750158 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal 
TLS certificate] *** 2025-08-29 17:34:32.750165 | orchestrator | Friday 29 August 2025 17:33:05 +0000 (0:00:01.961) 0:00:24.914 ********* 2025-08-29 17:34:32.750183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 17:34:32.750192 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:34:32.750204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 17:34:32.750221 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:34:32.750229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 17:34:32.750237 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:34:32.750244 | orchestrator | 2025-08-29 17:34:32.750252 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-08-29 17:34:32.750269 | orchestrator | Friday 29 August 2025 17:33:06 +0000 (0:00:00.835) 0:00:25.749 ********* 2025-08-29 17:34:32.750287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 17:34:32.750295 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:34:32.750303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 17:34:32.750316 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:34:32.750334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}})  2025-08-29 17:34:32.750343 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:34:32.750350 | orchestrator | 2025-08-29 17:34:32.750357 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-08-29 17:34:32.750408 | orchestrator | Friday 29 August 2025 17:33:07 +0000 (0:00:00.920) 0:00:26.670 ********* 2025-08-29 17:34:32.750424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 17:34:32.750458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 17:34:32.750468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 17:34:32.750483 | orchestrator | 2025-08-29 17:34:32.750491 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 17:34:32.750498 | orchestrator | Friday 29 August 2025 17:33:09 +0000 (0:00:01.990) 0:00:28.661 ********* 2025-08-29 17:34:32.750505 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:34:32.750512 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:34:32.750520 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:34:32.750527 | orchestrator | 2025-08-29 17:34:32.750534 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 17:34:32.750541 | orchestrator | Friday 29 August 2025 17:33:09 +0000 (0:00:00.374) 0:00:29.035 ********* 2025-08-29 17:34:32.750549 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:34:32.750556 | orchestrator | 2025-08-29 17:34:32.750563 | orchestrator | TASK [horizon : Creating Horizon 
database] ************************************* 2025-08-29 17:34:32.750575 | orchestrator | Friday 29 August 2025 17:33:10 +0000 (0:00:00.616) 0:00:29.652 ********* 2025-08-29 17:34:32.750582 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:34:32.750589 | orchestrator | 2025-08-29 17:34:32.750600 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-08-29 17:34:32.750607 | orchestrator | Friday 29 August 2025 17:33:12 +0000 (0:00:02.204) 0:00:31.856 ********* 2025-08-29 17:34:32.750615 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:34:32.750622 | orchestrator | 2025-08-29 17:34:32.750629 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-08-29 17:34:32.750636 | orchestrator | Friday 29 August 2025 17:33:15 +0000 (0:00:02.525) 0:00:34.382 ********* 2025-08-29 17:34:32.750643 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:34:32.750651 | orchestrator | 2025-08-29 17:34:32.750658 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-08-29 17:34:32.750665 | orchestrator | Friday 29 August 2025 17:33:29 +0000 (0:00:14.515) 0:00:48.897 ********* 2025-08-29 17:34:32.750672 | orchestrator | 2025-08-29 17:34:32.750679 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-08-29 17:34:32.750687 | orchestrator | Friday 29 August 2025 17:33:29 +0000 (0:00:00.067) 0:00:48.965 ********* 2025-08-29 17:34:32.750694 | orchestrator | 2025-08-29 17:34:32.750701 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-08-29 17:34:32.750708 | orchestrator | Friday 29 August 2025 17:33:29 +0000 (0:00:00.095) 0:00:49.061 ********* 2025-08-29 17:34:32.750715 | orchestrator | 2025-08-29 17:34:32.750723 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-08-29 
17:34:32.750735 | orchestrator | Friday 29 August 2025 17:33:29 +0000 (0:00:00.082) 0:00:49.144 ********* 2025-08-29 17:34:32.750742 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:34:32.750749 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:34:32.750756 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:34:32.750764 | orchestrator | 2025-08-29 17:34:32.750771 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:34:32.750778 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-08-29 17:34:32.750786 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-08-29 17:34:32.750793 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-08-29 17:34:32.750801 | orchestrator | 2025-08-29 17:34:32.750808 | orchestrator | 2025-08-29 17:34:32.750815 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:34:32.750822 | orchestrator | Friday 29 August 2025 17:34:30 +0000 (0:01:00.804) 0:01:49.948 ********* 2025-08-29 17:34:32.750829 | orchestrator | =============================================================================== 2025-08-29 17:34:32.750836 | orchestrator | horizon : Restart horizon container ------------------------------------ 60.80s 2025-08-29 17:34:32.750844 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.52s 2025-08-29 17:34:32.750851 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.53s 2025-08-29 17:34:32.750858 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.33s 2025-08-29 17:34:32.750865 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.20s 2025-08-29 17:34:32.750872 | 
orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.13s 2025-08-29 17:34:32.750879 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.99s 2025-08-29 17:34:32.750887 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.96s 2025-08-29 17:34:32.750894 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.95s 2025-08-29 17:34:32.750901 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.73s 2025-08-29 17:34:32.750908 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.22s 2025-08-29 17:34:32.750916 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.92s 2025-08-29 17:34:32.750924 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.84s 2025-08-29 17:34:32.750931 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2025-08-29 17:34:32.750938 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s 2025-08-29 17:34:32.750946 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s 2025-08-29 17:34:32.750953 | orchestrator | horizon : Update policy file name --------------------------------------- 0.60s 2025-08-29 17:34:32.750960 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.59s 2025-08-29 17:34:32.750967 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s 2025-08-29 17:34:32.750974 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s 2025-08-29 17:34:32.750982 | orchestrator | 2025-08-29 17:34:32 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:32.750990 
| orchestrator | 2025-08-29 17:34:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:34:35.796073 | orchestrator | 2025-08-29 17:34:35 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:34:35.798411 | orchestrator | 2025-08-29 17:34:35 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:34:35.802068 | orchestrator | 2025-08-29 17:34:35 | INFO  | Task d4cf479a-c74b-4edc-80d7-625bf007e37f is in state STARTED 2025-08-29 17:34:35.805079 | orchestrator | 2025-08-29 17:34:35 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:35.805525 | orchestrator | 2025-08-29 17:34:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:34:38.838940 | orchestrator | 2025-08-29 17:34:38 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:34:38.840931 | orchestrator | 2025-08-29 17:34:38 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:34:38.841860 | orchestrator | 2025-08-29 17:34:38 | INFO  | Task d4cf479a-c74b-4edc-80d7-625bf007e37f is in state STARTED 2025-08-29 17:34:38.843389 | orchestrator | 2025-08-29 17:34:38 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:38.843403 | orchestrator | 2025-08-29 17:34:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:34:41.883626 | orchestrator | 2025-08-29 17:34:41 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:34:41.885490 | orchestrator | 2025-08-29 17:34:41 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:34:41.888118 | orchestrator | 2025-08-29 17:34:41 | INFO  | Task d4cf479a-c74b-4edc-80d7-625bf007e37f is in state STARTED 2025-08-29 17:34:41.890436 | orchestrator | 2025-08-29 17:34:41 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:41.890495 | orchestrator | 2025-08-29 
17:34:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:34:44.939037 | orchestrator | 2025-08-29 17:34:44 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:34:44.939265 | orchestrator | 2025-08-29 17:34:44 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:34:44.940248 | orchestrator | 2025-08-29 17:34:44 | INFO  | Task d4cf479a-c74b-4edc-80d7-625bf007e37f is in state STARTED 2025-08-29 17:34:44.941284 | orchestrator | 2025-08-29 17:34:44 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:44.941331 | orchestrator | 2025-08-29 17:34:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:34:47.974631 | orchestrator | 2025-08-29 17:34:47 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:34:47.975606 | orchestrator | 2025-08-29 17:34:47 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:34:47.977169 | orchestrator | 2025-08-29 17:34:47 | INFO  | Task d4cf479a-c74b-4edc-80d7-625bf007e37f is in state STARTED 2025-08-29 17:34:47.978767 | orchestrator | 2025-08-29 17:34:47 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:47.978855 | orchestrator | 2025-08-29 17:34:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:34:51.005340 | orchestrator | 2025-08-29 17:34:51 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:34:51.007191 | orchestrator | 2025-08-29 17:34:51 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:34:51.007664 | orchestrator | 2025-08-29 17:34:51 | INFO  | Task d4cf479a-c74b-4edc-80d7-625bf007e37f is in state STARTED 2025-08-29 17:34:51.008577 | orchestrator | 2025-08-29 17:34:51 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:51.008603 | orchestrator | 2025-08-29 17:34:51 | INFO  | Wait 1 
second(s) until the next check 2025-08-29 17:34:54.064443 | orchestrator | 2025-08-29 17:34:54 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:34:54.064521 | orchestrator | 2025-08-29 17:34:54 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:34:54.064527 | orchestrator | 2025-08-29 17:34:54 | INFO  | Task d4cf479a-c74b-4edc-80d7-625bf007e37f is in state STARTED 2025-08-29 17:34:54.069117 | orchestrator | 2025-08-29 17:34:54 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:54.069180 | orchestrator | 2025-08-29 17:34:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:34:57.099454 | orchestrator | 2025-08-29 17:34:57 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:34:57.101244 | orchestrator | 2025-08-29 17:34:57 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:34:57.102999 | orchestrator | 2025-08-29 17:34:57 | INFO  | Task d4cf479a-c74b-4edc-80d7-625bf007e37f is in state STARTED 2025-08-29 17:34:57.104946 | orchestrator | 2025-08-29 17:34:57 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:34:57.104964 | orchestrator | 2025-08-29 17:34:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:00.152344 | orchestrator | 2025-08-29 17:35:00 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:00.155491 | orchestrator | 2025-08-29 17:35:00 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:35:00.156798 | orchestrator | 2025-08-29 17:35:00 | INFO  | Task d4cf479a-c74b-4edc-80d7-625bf007e37f is in state SUCCESS 2025-08-29 17:35:00.157854 | orchestrator | 2025-08-29 17:35:00 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:35:00.157933 | orchestrator | 2025-08-29 17:35:00 | INFO  | Wait 1 second(s) until the next check 
2025-08-29 17:35:03.205654 | orchestrator | 2025-08-29 17:35:03 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:03.206281 | orchestrator | 2025-08-29 17:35:03 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:35:03.207549 | orchestrator | 2025-08-29 17:35:03 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:03.208639 | orchestrator | 2025-08-29 17:35:03 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state STARTED 2025-08-29 17:35:03.209706 | orchestrator | 2025-08-29 17:35:03 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:35:03.209735 | orchestrator | 2025-08-29 17:35:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:06.317618 | orchestrator | 2025-08-29 17:35:06 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:06.317726 | orchestrator | 2025-08-29 17:35:06 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:35:06.317743 | orchestrator | 2025-08-29 17:35:06 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:06.317756 | orchestrator | 2025-08-29 17:35:06 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state STARTED 2025-08-29 17:35:06.317768 | orchestrator | 2025-08-29 17:35:06 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:35:06.317779 | orchestrator | 2025-08-29 17:35:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:09.305524 | orchestrator | 2025-08-29 17:35:09 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:09.305966 | orchestrator | 2025-08-29 17:35:09 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:35:09.306527 | orchestrator | 2025-08-29 17:35:09 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 
2025-08-29 17:35:09.307186 | orchestrator | 2025-08-29 17:35:09 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state STARTED 2025-08-29 17:35:09.309610 | orchestrator | 2025-08-29 17:35:09 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:35:09.309627 | orchestrator | 2025-08-29 17:35:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:12.355465 | orchestrator | 2025-08-29 17:35:12 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:12.356750 | orchestrator | 2025-08-29 17:35:12 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:35:12.357843 | orchestrator | 2025-08-29 17:35:12 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:12.360463 | orchestrator | 2025-08-29 17:35:12 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state STARTED 2025-08-29 17:35:12.361487 | orchestrator | 2025-08-29 17:35:12 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:35:12.361536 | orchestrator | 2025-08-29 17:35:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:15.396813 | orchestrator | 2025-08-29 17:35:15 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:15.398457 | orchestrator | 2025-08-29 17:35:15 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:35:15.401323 | orchestrator | 2025-08-29 17:35:15 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:15.403690 | orchestrator | 2025-08-29 17:35:15 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state STARTED 2025-08-29 17:35:15.404598 | orchestrator | 2025-08-29 17:35:15 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:35:15.404630 | orchestrator | 2025-08-29 17:35:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:18.439888 | 
orchestrator | 2025-08-29 17:35:18 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:18.441945 | orchestrator | 2025-08-29 17:35:18 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:35:18.443888 | orchestrator | 2025-08-29 17:35:18 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:18.445057 | orchestrator | 2025-08-29 17:35:18 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state STARTED 2025-08-29 17:35:18.446609 | orchestrator | 2025-08-29 17:35:18 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:35:18.447203 | orchestrator | 2025-08-29 17:35:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:21.516326 | orchestrator | 2025-08-29 17:35:21 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:21.517825 | orchestrator | 2025-08-29 17:35:21 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:35:21.518580 | orchestrator | 2025-08-29 17:35:21 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:21.519319 | orchestrator | 2025-08-29 17:35:21 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state STARTED 2025-08-29 17:35:21.522507 | orchestrator | 2025-08-29 17:35:21 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:35:21.522550 | orchestrator | 2025-08-29 17:35:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:24.567526 | orchestrator | 2025-08-29 17:35:24 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:24.568097 | orchestrator | 2025-08-29 17:35:24 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:35:24.570429 | orchestrator | 2025-08-29 17:35:24 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:24.571658 | 
orchestrator | 2025-08-29 17:35:24 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state STARTED 2025-08-29 17:35:24.573611 | orchestrator | 2025-08-29 17:35:24 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:35:24.573659 | orchestrator | 2025-08-29 17:35:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:27.609209 | orchestrator | 2025-08-29 17:35:27 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:27.610100 | orchestrator | 2025-08-29 17:35:27 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:35:27.611001 | orchestrator | 2025-08-29 17:35:27 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:27.614215 | orchestrator | 2025-08-29 17:35:27 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state STARTED 2025-08-29 17:35:27.614945 | orchestrator | 2025-08-29 17:35:27 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:35:27.614979 | orchestrator | 2025-08-29 17:35:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:30.670924 | orchestrator | 2025-08-29 17:35:30 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:30.675019 | orchestrator | 2025-08-29 17:35:30 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:35:30.678732 | orchestrator | 2025-08-29 17:35:30 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:30.681444 | orchestrator | 2025-08-29 17:35:30 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state STARTED 2025-08-29 17:35:30.684604 | orchestrator | 2025-08-29 17:35:30 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:35:30.684649 | orchestrator | 2025-08-29 17:35:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:33.731594 | orchestrator | 2025-08-29 
17:35:33 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:33.734669 | orchestrator | 2025-08-29 17:35:33 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:35:33.737723 | orchestrator | 2025-08-29 17:35:33 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:33.739870 | orchestrator | 2025-08-29 17:35:33 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state STARTED 2025-08-29 17:35:33.742175 | orchestrator | 2025-08-29 17:35:33 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:35:33.742209 | orchestrator | 2025-08-29 17:35:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:36.793884 | orchestrator | 2025-08-29 17:35:36 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:36.793976 | orchestrator | 2025-08-29 17:35:36 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:35:36.793986 | orchestrator | 2025-08-29 17:35:36 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:36.793994 | orchestrator | 2025-08-29 17:35:36 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state STARTED 2025-08-29 17:35:36.794071 | orchestrator | 2025-08-29 17:35:36 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state STARTED 2025-08-29 17:35:36.794081 | orchestrator | 2025-08-29 17:35:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:39.833624 | orchestrator | 2025-08-29 17:35:39 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:39.835708 | orchestrator | 2025-08-29 17:35:39 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:35:39.839164 | orchestrator | 2025-08-29 17:35:39 | INFO  | Task d5a4705b-f97e-4175-8a33-ff9ebf945f82 is in state STARTED 2025-08-29 17:35:39.842362 | orchestrator | 2025-08-29 
17:35:39 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:39.847245 | orchestrator | 2025-08-29 17:35:39 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state STARTED 2025-08-29 17:35:39.854573 | orchestrator | 2025-08-29 17:35:39 | INFO  | Task 1a2c2507-5c73-4f4e-bbfd-e13d8765caab is in state SUCCESS 2025-08-29 17:35:39.857470 | orchestrator | 2025-08-29 17:35:39.857558 | orchestrator | 2025-08-29 17:35:39.857649 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:35:39.857669 | orchestrator | 2025-08-29 17:35:39.857680 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 17:35:39.857692 | orchestrator | Friday 29 August 2025 17:34:34 +0000 (0:00:00.275) 0:00:00.275 ********* 2025-08-29 17:35:39.857703 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:35:39.858141 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:35:39.858160 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:35:39.858167 | orchestrator | 2025-08-29 17:35:39.858175 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:35:39.858183 | orchestrator | Friday 29 August 2025 17:34:34 +0000 (0:00:00.384) 0:00:00.659 ********* 2025-08-29 17:35:39.858191 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-08-29 17:35:39.858199 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-08-29 17:35:39.858207 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-08-29 17:35:39.858214 | orchestrator | 2025-08-29 17:35:39.858222 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-08-29 17:35:39.858229 | orchestrator | 2025-08-29 17:35:39.858237 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-08-29 17:35:39.858244 | 
orchestrator | Friday 29 August 2025 17:34:35 +0000 (0:00:00.747) 0:00:01.406 ********* 2025-08-29 17:35:39.858252 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:35:39.858259 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:35:39.858266 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:35:39.858274 | orchestrator | 2025-08-29 17:35:39.858281 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:35:39.858289 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:35:39.858299 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:35:39.858307 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:35:39.858314 | orchestrator | 2025-08-29 17:35:39.858321 | orchestrator | 2025-08-29 17:35:39.858329 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:35:39.858336 | orchestrator | Friday 29 August 2025 17:34:58 +0000 (0:00:23.745) 0:00:25.152 ********* 2025-08-29 17:35:39.858343 | orchestrator | =============================================================================== 2025-08-29 17:35:39.858352 | orchestrator | Waiting for Keystone public port to be UP ------------------------------ 23.75s 2025-08-29 17:35:39.858359 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s 2025-08-29 17:35:39.858413 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s 2025-08-29 17:35:39.858422 | orchestrator | 2025-08-29 17:35:39.858430 | orchestrator | 2025-08-29 17:35:39.858437 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:35:39.858444 | orchestrator | 2025-08-29 17:35:39.858451 | orchestrator | TASK [Group hosts based on 
Kolla action] *************************************** 2025-08-29 17:35:39.858458 | orchestrator | Friday 29 August 2025 17:32:41 +0000 (0:00:00.298) 0:00:00.298 ********* 2025-08-29 17:35:39.858465 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:35:39.858478 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:35:39.858489 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:35:39.858500 | orchestrator | 2025-08-29 17:35:39.858510 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:35:39.858536 | orchestrator | Friday 29 August 2025 17:32:41 +0000 (0:00:00.307) 0:00:00.606 ********* 2025-08-29 17:35:39.858547 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-08-29 17:35:39.858558 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-08-29 17:35:39.858569 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-08-29 17:35:39.858580 | orchestrator | 2025-08-29 17:35:39.858592 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-08-29 17:35:39.858603 | orchestrator | 2025-08-29 17:35:39.858615 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 17:35:39.858627 | orchestrator | Friday 29 August 2025 17:32:41 +0000 (0:00:00.567) 0:00:01.173 ********* 2025-08-29 17:35:39.858641 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:35:39.858650 | orchestrator | 2025-08-29 17:35:39.858657 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-08-29 17:35:39.858665 | orchestrator | Friday 29 August 2025 17:32:42 +0000 (0:00:00.614) 0:00:01.788 ********* 2025-08-29 17:35:39.858732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.858753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.858781 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.858804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 17:35:39.858820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 17:35:39.858834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 17:35:39.858854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.858864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.858885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.858897 | orchestrator | 2025-08-29 17:35:39.858914 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-08-29 17:35:39.858930 | orchestrator | Friday 29 August 2025 17:32:44 +0000 (0:00:01.721) 0:00:03.509 ********* 2025-08-29 17:35:39.858942 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-08-29 17:35:39.858955 | orchestrator | 2025-08-29 17:35:39.858967 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-08-29 17:35:39.858980 | orchestrator | Friday 29 August 2025 17:32:45 +0000 (0:00:00.904) 0:00:04.414 ********* 2025-08-29 17:35:39.858992 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:35:39.859004 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:35:39.859016 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:35:39.859028 | orchestrator | 2025-08-29 17:35:39.859040 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-08-29 17:35:39.859056 | 
orchestrator | Friday 29 August 2025 17:32:45 +0000 (0:00:00.592) 0:00:05.007 ********* 2025-08-29 17:35:39.859078 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 17:35:39.859097 | orchestrator | 2025-08-29 17:35:39.859112 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 17:35:39.859123 | orchestrator | Friday 29 August 2025 17:32:46 +0000 (0:00:00.778) 0:00:05.785 ********* 2025-08-29 17:35:39.859136 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:35:39.859148 | orchestrator | 2025-08-29 17:35:39.859161 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-08-29 17:35:39.859172 | orchestrator | Friday 29 August 2025 17:32:47 +0000 (0:00:00.596) 0:00:06.382 ********* 2025-08-29 17:35:39.859185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.859441 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.859463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.859478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 17:35:39.859487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 17:35:39.859497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 17:35:39.859519 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.859540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.859559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.859573 | orchestrator | 2025-08-29 17:35:39.859586 | orchestrator | TASK 
[service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-08-29 17:35:39.859599 | orchestrator | Friday 29 August 2025 17:32:50 +0000 (0:00:03.387) 0:00:09.770 ********* 2025-08-29 17:35:39.859636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 17:35:39.859645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 17:35:39.859653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 
'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 17:35:39.859661 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:35:39.859683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 17:35:39.859692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 17:35:39.859699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 17:35:39.859707 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:35:39.859718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 17:35:39.859726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 17:35:39.859745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 17:35:39.859753 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:35:39.859760 | orchestrator | 2025-08-29 17:35:39.859768 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-08-29 17:35:39.859775 | orchestrator | Friday 29 August 2025 17:32:51 +0000 (0:00:00.825) 0:00:10.595 ********* 2025-08-29 17:35:39.859783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 17:35:39.859791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 17:35:39.859803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 17:35:39.859811 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:35:39.859823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 17:35:39.859858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 17:35:39.859875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 17:35:39.859890 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:35:39.859903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 17:35:39.859922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 17:35:39.859934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 17:35:39.859947 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:35:39.859960 | orchestrator | 2025-08-29 17:35:39.859991 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-08-29 17:35:39.860014 | orchestrator | Friday 29 August 2025 17:32:52 +0000 (0:00:00.792) 0:00:11.388 ********* 2025-08-29 17:35:39.860039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.860050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.860060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.860074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 17:35:39.860088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 17:35:39.860110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 17:35:39.860132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.860149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.860170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.860184 | orchestrator | 2025-08-29 17:35:39.860197 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-08-29 17:35:39.860211 | orchestrator | Friday 29 August 2025 17:32:55 +0000 (0:00:03.394) 0:00:14.782 ********* 2025-08-29 17:35:39.860230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.860250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 17:35:39.860266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.860275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': 
'30'}}})  2025-08-29 17:35:39.860285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.860305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 17:35:39.860325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.860337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.860356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.860428 | orchestrator | 2025-08-29 17:35:39.860439 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-08-29 17:35:39.860446 | orchestrator | Friday 29 August 2025 17:33:01 +0000 (0:00:05.885) 0:00:20.667 ********* 2025-08-29 17:35:39.860454 | 
orchestrator | changed: [testbed-node-0] 2025-08-29 17:35:39.860464 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:35:39.860476 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:35:39.860486 | orchestrator | 2025-08-29 17:35:39.860497 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-08-29 17:35:39.860508 | orchestrator | Friday 29 August 2025 17:33:02 +0000 (0:00:01.463) 0:00:22.131 ********* 2025-08-29 17:35:39.860520 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:35:39.860531 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:35:39.860542 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:35:39.860552 | orchestrator | 2025-08-29 17:35:39.860563 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-08-29 17:35:39.860576 | orchestrator | Friday 29 August 2025 17:33:03 +0000 (0:00:00.613) 0:00:22.744 ********* 2025-08-29 17:35:39.860588 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:35:39.860599 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:35:39.860609 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:35:39.860620 | orchestrator | 2025-08-29 17:35:39.860631 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-08-29 17:35:39.860642 | orchestrator | Friday 29 August 2025 17:33:03 +0000 (0:00:00.305) 0:00:23.049 ********* 2025-08-29 17:35:39.860653 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:35:39.860664 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:35:39.860675 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:35:39.860685 | orchestrator | 2025-08-29 17:35:39.860696 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-08-29 17:35:39.860705 | orchestrator | Friday 29 August 2025 17:33:04 +0000 (0:00:00.593) 0:00:23.643 ********* 2025-08-29 17:35:39.860737 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.860751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 17:35:39.860777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.860791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 17:35:39.860804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.860831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 17:35:39.860843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.860855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.860869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.860882 | orchestrator | 2025-08-29 17:35:39.860894 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 17:35:39.860906 | orchestrator | Friday 29 August 2025 17:33:06 +0000 (0:00:02.595) 0:00:26.239 ********* 2025-08-29 17:35:39.860918 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:35:39.860930 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:35:39.860942 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:35:39.860954 | orchestrator | 2025-08-29 17:35:39.860966 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-08-29 17:35:39.860978 | orchestrator | Friday 29 August 2025 17:33:07 +0000 (0:00:00.349) 0:00:26.588 ********* 2025-08-29 17:35:39.860990 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 17:35:39.861003 | orchestrator | 
changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 17:35:39.861017 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 17:35:39.861029 | orchestrator | 2025-08-29 17:35:39.861041 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-08-29 17:35:39.861057 | orchestrator | Friday 29 August 2025 17:33:09 +0000 (0:00:02.061) 0:00:28.650 ********* 2025-08-29 17:35:39.861064 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 17:35:39.861072 | orchestrator | 2025-08-29 17:35:39.861079 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-08-29 17:35:39.861087 | orchestrator | Friday 29 August 2025 17:33:10 +0000 (0:00:00.998) 0:00:29.648 ********* 2025-08-29 17:35:39.861094 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:35:39.861101 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:35:39.861108 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:35:39.861116 | orchestrator | 2025-08-29 17:35:39.861123 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-08-29 17:35:39.861130 | orchestrator | Friday 29 August 2025 17:33:11 +0000 (0:00:00.951) 0:00:30.600 ********* 2025-08-29 17:35:39.861137 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-08-29 17:35:39.861144 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 17:35:39.861151 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-08-29 17:35:39.861159 | orchestrator | 2025-08-29 17:35:39.861166 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-08-29 17:35:39.861174 | orchestrator | Friday 29 August 2025 17:33:12 +0000 (0:00:01.316) 0:00:31.917 ********* 2025-08-29 17:35:39.861181 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:35:39.861189 
| orchestrator | ok: [testbed-node-1] 2025-08-29 17:35:39.861196 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:35:39.861203 | orchestrator | 2025-08-29 17:35:39.861210 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-08-29 17:35:39.861222 | orchestrator | Friday 29 August 2025 17:33:12 +0000 (0:00:00.325) 0:00:32.243 ********* 2025-08-29 17:35:39.861235 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-08-29 17:35:39.861247 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-08-29 17:35:39.861264 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-08-29 17:35:39.861275 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-08-29 17:35:39.861287 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-08-29 17:35:39.861297 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-08-29 17:35:39.861307 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-08-29 17:35:39.861319 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-08-29 17:35:39.861329 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-08-29 17:35:39.861340 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-08-29 17:35:39.861352 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-08-29 17:35:39.861363 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 
'fernet-push.sh'}) 2025-08-29 17:35:39.861397 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-08-29 17:35:39.861411 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-08-29 17:35:39.861424 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-08-29 17:35:39.861435 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 17:35:39.861447 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 17:35:39.861459 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 17:35:39.861493 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 17:35:39.861507 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 17:35:39.861519 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 17:35:39.861530 | orchestrator | 2025-08-29 17:35:39.861538 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-08-29 17:35:39.861545 | orchestrator | Friday 29 August 2025 17:33:21 +0000 (0:00:08.624) 0:00:40.867 ********* 2025-08-29 17:35:39.861553 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 17:35:39.861560 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 17:35:39.861567 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 17:35:39.861575 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 
2025-08-29 17:35:39.861582 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 17:35:39.861589 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 17:35:39.861597 | orchestrator | 2025-08-29 17:35:39.861604 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-08-29 17:35:39.861611 | orchestrator | Friday 29 August 2025 17:33:24 +0000 (0:00:02.616) 0:00:43.483 ********* 2025-08-29 17:35:39.861620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.861634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.861648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 17:35:39.861662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 17:35:39.861671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 17:35:39.861678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 17:35:39.861691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.861700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.861707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 17:35:39.861719 | orchestrator | 2025-08-29 17:35:39.861727 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 17:35:39.861735 | orchestrator | Friday 29 August 2025 17:33:26 +0000 (0:00:02.176) 0:00:45.660 ********* 2025-08-29 17:35:39.861742 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:35:39.861750 | 
orchestrator | skipping: [testbed-node-1] 2025-08-29 17:35:39.861758 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:35:39.861765 | orchestrator | 2025-08-29 17:35:39.861776 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-08-29 17:35:39.861784 | orchestrator | Friday 29 August 2025 17:33:26 +0000 (0:00:00.277) 0:00:45.938 ********* 2025-08-29 17:35:39.861791 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:35:39.861799 | orchestrator | 2025-08-29 17:35:39.861806 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-08-29 17:35:39.861813 | orchestrator | Friday 29 August 2025 17:33:28 +0000 (0:00:02.056) 0:00:47.995 ********* 2025-08-29 17:35:39.861821 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:35:39.861828 | orchestrator | 2025-08-29 17:35:39.861835 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-08-29 17:35:39.861843 | orchestrator | Friday 29 August 2025 17:33:30 +0000 (0:00:01.949) 0:00:49.944 ********* 2025-08-29 17:35:39.861850 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:35:39.861857 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:35:39.861865 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:35:39.861872 | orchestrator | 2025-08-29 17:35:39.861879 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-08-29 17:35:39.861886 | orchestrator | Friday 29 August 2025 17:33:31 +0000 (0:00:00.983) 0:00:50.928 ********* 2025-08-29 17:35:39.861893 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:35:39.861901 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:35:39.861908 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:35:39.861915 | orchestrator | 2025-08-29 17:35:39.861922 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-08-29 17:35:39.861929 | 
orchestrator | Friday 29 August 2025 17:33:32 +0000 (0:00:00.713) 0:00:51.641 ********* 2025-08-29 17:35:39.861936 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:35:39.861943 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:35:39.861951 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:35:39.861958 | orchestrator | 2025-08-29 17:35:39.861965 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-08-29 17:35:39.861973 | orchestrator | Friday 29 August 2025 17:33:32 +0000 (0:00:00.377) 0:00:52.019 ********* 2025-08-29 17:35:39.861980 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:35:39.861987 | orchestrator | 2025-08-29 17:35:39.861994 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-08-29 17:35:39.862001 | orchestrator | Friday 29 August 2025 17:33:45 +0000 (0:00:12.628) 0:01:04.647 ********* 2025-08-29 17:35:39.862008 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:35:39.862050 | orchestrator | 2025-08-29 17:35:39.862058 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-08-29 17:35:39.862066 | orchestrator | Friday 29 August 2025 17:33:54 +0000 (0:00:09.399) 0:01:14.047 ********* 2025-08-29 17:35:39.862073 | orchestrator | 2025-08-29 17:35:39.862080 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-08-29 17:35:39.862088 | orchestrator | Friday 29 August 2025 17:33:54 +0000 (0:00:00.062) 0:01:14.109 ********* 2025-08-29 17:35:39.862095 | orchestrator | 2025-08-29 17:35:39.862103 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-08-29 17:35:39.862115 | orchestrator | Friday 29 August 2025 17:33:54 +0000 (0:00:00.063) 0:01:14.172 ********* 2025-08-29 17:35:39.862123 | orchestrator | 2025-08-29 17:35:39.862130 | orchestrator | RUNNING HANDLER [keystone : Restart 
keystone-ssh container] ******************** 2025-08-29 17:35:39.862137 | orchestrator | Friday 29 August 2025 17:33:54 +0000 (0:00:00.064) 0:01:14.237 ********* 2025-08-29 17:35:39.862145 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:35:39.862152 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:35:39.862159 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:35:39.862167 | orchestrator | 2025-08-29 17:35:39.862174 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-08-29 17:35:39.862181 | orchestrator | Friday 29 August 2025 17:34:23 +0000 (0:00:29.003) 0:01:43.240 ********* 2025-08-29 17:35:39.862192 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:35:39.862200 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:35:39.862207 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:35:39.862214 | orchestrator | 2025-08-29 17:35:39.862222 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-08-29 17:35:39.862229 | orchestrator | Friday 29 August 2025 17:34:37 +0000 (0:00:13.135) 0:01:56.376 ********* 2025-08-29 17:35:39.862236 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:35:39.862243 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:35:39.862251 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:35:39.862258 | orchestrator | 2025-08-29 17:35:39.862265 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 17:35:39.862272 | orchestrator | Friday 29 August 2025 17:34:50 +0000 (0:00:13.345) 0:02:09.721 ********* 2025-08-29 17:35:39.862280 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:35:39.862287 | orchestrator | 2025-08-29 17:35:39.862294 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-08-29 
17:35:39.862302 | orchestrator | Friday 29 August 2025 17:34:51 +0000 (0:00:00.993) 0:02:10.715 ********* 2025-08-29 17:35:39.862309 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:35:39.862316 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:35:39.862323 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:35:39.862330 | orchestrator | 2025-08-29 17:35:39.862338 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-08-29 17:35:39.862345 | orchestrator | Friday 29 August 2025 17:34:52 +0000 (0:00:00.845) 0:02:11.560 ********* 2025-08-29 17:35:39.862352 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:35:39.862359 | orchestrator | 2025-08-29 17:35:39.862366 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-08-29 17:35:39.862416 | orchestrator | Friday 29 August 2025 17:34:54 +0000 (0:00:01.781) 0:02:13.342 ********* 2025-08-29 17:35:39.862424 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-08-29 17:35:39.862431 | orchestrator | 2025-08-29 17:35:39.862439 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-08-29 17:35:39.862446 | orchestrator | Friday 29 August 2025 17:35:02 +0000 (0:00:08.655) 0:02:21.997 ********* 2025-08-29 17:35:39.862453 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-08-29 17:35:39.862461 | orchestrator | 2025-08-29 17:35:39.862473 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-08-29 17:35:39.862481 | orchestrator | Friday 29 August 2025 17:35:26 +0000 (0:00:23.969) 0:02:45.967 ********* 2025-08-29 17:35:39.862488 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-08-29 17:35:39.862496 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-08-29 
17:35:39.862503 | orchestrator | 2025-08-29 17:35:39.862516 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-08-29 17:35:39.862528 | orchestrator | Friday 29 August 2025 17:35:32 +0000 (0:00:05.522) 0:02:51.489 ********* 2025-08-29 17:35:39.862549 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:35:39.862561 | orchestrator | 2025-08-29 17:35:39.862573 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-08-29 17:35:39.862586 | orchestrator | Friday 29 August 2025 17:35:32 +0000 (0:00:00.155) 0:02:51.645 ********* 2025-08-29 17:35:39.862598 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:35:39.862611 | orchestrator | 2025-08-29 17:35:39.862623 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-08-29 17:35:39.862635 | orchestrator | Friday 29 August 2025 17:35:32 +0000 (0:00:00.162) 0:02:51.808 ********* 2025-08-29 17:35:39.862643 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:35:39.862650 | orchestrator | 2025-08-29 17:35:39.862657 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-08-29 17:35:39.862664 | orchestrator | Friday 29 August 2025 17:35:32 +0000 (0:00:00.147) 0:02:51.956 ********* 2025-08-29 17:35:39.862672 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:35:39.862679 | orchestrator | 2025-08-29 17:35:39.862686 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-08-29 17:35:39.862694 | orchestrator | Friday 29 August 2025 17:35:33 +0000 (0:00:00.675) 0:02:52.631 ********* 2025-08-29 17:35:39.862701 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:35:39.862708 | orchestrator | 2025-08-29 17:35:39.862716 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 17:35:39.862723 | orchestrator | Friday 29 August 
2025 17:35:36 +0000 (0:00:02.996) 0:02:55.627 ********* 2025-08-29 17:35:39.862730 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:35:39.862738 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:35:39.862745 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:35:39.862752 | orchestrator | 2025-08-29 17:35:39.862759 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:35:39.862768 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-08-29 17:35:39.862777 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-08-29 17:35:39.862785 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-08-29 17:35:39.862792 | orchestrator | 2025-08-29 17:35:39.862799 | orchestrator | 2025-08-29 17:35:39.862806 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:35:39.862814 | orchestrator | Friday 29 August 2025 17:35:37 +0000 (0:00:00.848) 0:02:56.476 ********* 2025-08-29 17:35:39.862821 | orchestrator | =============================================================================== 2025-08-29 17:35:39.862833 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 29.00s 2025-08-29 17:35:39.862840 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.97s 2025-08-29 17:35:39.862847 | orchestrator | keystone : Restart keystone container ---------------------------------- 13.35s 2025-08-29 17:35:39.862854 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 13.14s 2025-08-29 17:35:39.862862 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.63s 2025-08-29 17:35:39.862869 | orchestrator | keystone : Running Keystone fernet 
bootstrap container ------------------ 9.40s 2025-08-29 17:35:39.862876 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 8.66s 2025-08-29 17:35:39.862883 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.62s 2025-08-29 17:35:39.862890 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.89s 2025-08-29 17:35:39.862897 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 5.52s 2025-08-29 17:35:39.862904 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.39s 2025-08-29 17:35:39.862911 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.39s 2025-08-29 17:35:39.862924 | orchestrator | keystone : Creating default user role ----------------------------------- 3.00s 2025-08-29 17:35:39.862931 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.62s 2025-08-29 17:35:39.862939 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.60s 2025-08-29 17:35:39.862946 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.18s 2025-08-29 17:35:39.862953 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.06s 2025-08-29 17:35:39.862960 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.06s 2025-08-29 17:35:39.862967 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 1.95s 2025-08-29 17:35:39.862974 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.78s 2025-08-29 17:35:39.862982 | orchestrator | 2025-08-29 17:35:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:42.913481 | orchestrator | 2025-08-29 17:35:42 | INFO  | Task 
ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:42.914107 | orchestrator | 2025-08-29 17:35:42 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:35:42.916637 | orchestrator | 2025-08-29 17:35:42 | INFO  | Task d5a4705b-f97e-4175-8a33-ff9ebf945f82 is in state STARTED 2025-08-29 17:35:42.918229 | orchestrator | 2025-08-29 17:35:42 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:42.919394 | orchestrator | 2025-08-29 17:35:42 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state STARTED 2025-08-29 17:35:42.919618 | orchestrator | 2025-08-29 17:35:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:45.965409 | orchestrator | 2025-08-29 17:35:45 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:45.966917 | orchestrator | 2025-08-29 17:35:45 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:35:45.968143 | orchestrator | 2025-08-29 17:35:45 | INFO  | Task d5a4705b-f97e-4175-8a33-ff9ebf945f82 is in state STARTED 2025-08-29 17:35:45.969739 | orchestrator | 2025-08-29 17:35:45 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:45.970904 | orchestrator | 2025-08-29 17:35:45 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state STARTED 2025-08-29 17:35:45.971096 | orchestrator | 2025-08-29 17:35:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:49.016194 | orchestrator | 2025-08-29 17:35:49 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:49.019564 | orchestrator | 2025-08-29 17:35:49 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state STARTED 2025-08-29 17:35:49.021348 | orchestrator | 2025-08-29 17:35:49 | INFO  | Task d5a4705b-f97e-4175-8a33-ff9ebf945f82 is in state STARTED 2025-08-29 17:35:49.022756 | orchestrator | 2025-08-29 17:35:49 | INFO  | Task 
ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:49.026425 | orchestrator | 2025-08-29 17:35:49 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state STARTED 2025-08-29 17:35:49.026485 | orchestrator | 2025-08-29 17:35:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:52.063757 | orchestrator | 2025-08-29 17:35:52 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:52.064202 | orchestrator | 2025-08-29 17:35:52 | INFO  | Task dedc35ab-1ed9-487a-8e75-5453176df643 is in state SUCCESS 2025-08-29 17:35:52.064241 | orchestrator | 2025-08-29 17:35:52 | INFO  | Task d5a4705b-f97e-4175-8a33-ff9ebf945f82 is in state STARTED 2025-08-29 17:35:52.064356 | orchestrator | 2025-08-29 17:35:52 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:52.064421 | orchestrator | 2025-08-29 17:35:52 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state STARTED 2025-08-29 17:35:52.064433 | orchestrator | 2025-08-29 17:35:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:55.103464 | orchestrator | 2025-08-29 17:35:55 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 17:35:55.103577 | orchestrator | 2025-08-29 17:35:55 | INFO  | Task d5a4705b-f97e-4175-8a33-ff9ebf945f82 is in state STARTED 2025-08-29 17:35:55.103604 | orchestrator | 2025-08-29 17:35:55 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:55.104296 | orchestrator | 2025-08-29 17:35:55 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:35:55.106171 | orchestrator | 2025-08-29 17:35:55.106208 | orchestrator | 2025-08-29 17:35:55.106219 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-08-29 17:35:55.106230 | orchestrator | 2025-08-29 17:35:55.106240 | orchestrator | TASK [Disable the ceph dashboard] 
********************************************** 2025-08-29 17:35:55.106250 | orchestrator | Friday 29 August 2025 17:34:34 +0000 (0:00:00.323) 0:00:00.323 ********* 2025-08-29 17:35:55.106261 | orchestrator | changed: [testbed-manager] 2025-08-29 17:35:55.106273 | orchestrator | 2025-08-29 17:35:55.106283 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-08-29 17:35:55.106292 | orchestrator | Friday 29 August 2025 17:34:36 +0000 (0:00:02.372) 0:00:02.695 ********* 2025-08-29 17:35:55.106302 | orchestrator | changed: [testbed-manager] 2025-08-29 17:35:55.106312 | orchestrator | 2025-08-29 17:35:55.106321 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-08-29 17:35:55.106331 | orchestrator | Friday 29 August 2025 17:34:37 +0000 (0:00:01.211) 0:00:03.907 ********* 2025-08-29 17:35:55.106340 | orchestrator | changed: [testbed-manager] 2025-08-29 17:35:55.106350 | orchestrator | 2025-08-29 17:35:55.106359 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-08-29 17:35:55.106399 | orchestrator | Friday 29 August 2025 17:34:38 +0000 (0:00:01.242) 0:00:05.150 ********* 2025-08-29 17:35:55.106412 | orchestrator | changed: [testbed-manager] 2025-08-29 17:35:55.106421 | orchestrator | 2025-08-29 17:35:55.106431 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-08-29 17:35:55.106440 | orchestrator | Friday 29 August 2025 17:34:40 +0000 (0:00:01.645) 0:00:06.795 ********* 2025-08-29 17:35:55.106450 | orchestrator | changed: [testbed-manager] 2025-08-29 17:35:55.106483 | orchestrator | 2025-08-29 17:35:55.106494 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-08-29 17:35:55.106503 | orchestrator | Friday 29 August 2025 17:34:41 +0000 (0:00:01.199) 0:00:07.994 ********* 2025-08-29 17:35:55.106513 | orchestrator | 
changed: [testbed-manager] 2025-08-29 17:35:55.106522 | orchestrator | 2025-08-29 17:35:55.106531 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-08-29 17:35:55.106541 | orchestrator | Friday 29 August 2025 17:34:42 +0000 (0:00:01.083) 0:00:09.078 ********* 2025-08-29 17:35:55.106550 | orchestrator | changed: [testbed-manager] 2025-08-29 17:35:55.106559 | orchestrator | 2025-08-29 17:35:55.106569 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-08-29 17:35:55.106578 | orchestrator | Friday 29 August 2025 17:34:44 +0000 (0:00:02.136) 0:00:11.214 ********* 2025-08-29 17:35:55.106588 | orchestrator | changed: [testbed-manager] 2025-08-29 17:35:55.106597 | orchestrator | 2025-08-29 17:35:55.106607 | orchestrator | TASK [Create admin user] ******************************************************* 2025-08-29 17:35:55.106617 | orchestrator | Friday 29 August 2025 17:34:46 +0000 (0:00:01.178) 0:00:12.392 ********* 2025-08-29 17:35:55.106626 | orchestrator | changed: [testbed-manager] 2025-08-29 17:35:55.106661 | orchestrator | 2025-08-29 17:35:55.106671 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-08-29 17:35:55.106681 | orchestrator | Friday 29 August 2025 17:35:24 +0000 (0:00:38.744) 0:00:51.137 ********* 2025-08-29 17:35:55.106690 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:35:55.106699 | orchestrator | 2025-08-29 17:35:55.106709 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-08-29 17:35:55.106718 | orchestrator | 2025-08-29 17:35:55.106728 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-08-29 17:35:55.106737 | orchestrator | Friday 29 August 2025 17:35:25 +0000 (0:00:00.187) 0:00:51.325 ********* 2025-08-29 17:35:55.106747 | orchestrator | changed: [testbed-node-0] 2025-08-29 
17:35:55.106756 | orchestrator | 2025-08-29 17:35:55.106767 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-08-29 17:35:55.106778 | orchestrator | 2025-08-29 17:35:55.106788 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-08-29 17:35:55.106799 | orchestrator | Friday 29 August 2025 17:35:36 +0000 (0:00:11.726) 0:01:03.051 ********* 2025-08-29 17:35:55.106809 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:35:55.106820 | orchestrator | 2025-08-29 17:35:55.106831 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-08-29 17:35:55.106841 | orchestrator | 2025-08-29 17:35:55.106853 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-08-29 17:35:55.106863 | orchestrator | Friday 29 August 2025 17:35:48 +0000 (0:00:11.517) 0:01:14.569 ********* 2025-08-29 17:35:55.106874 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:35:55.106884 | orchestrator | 2025-08-29 17:35:55.106894 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:35:55.106907 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 17:35:55.106935 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:35:55.106947 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:35:55.106958 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:35:55.106975 | orchestrator | 2025-08-29 17:35:55.106992 | orchestrator | 2025-08-29 17:35:55.107007 | orchestrator | 2025-08-29 17:35:55.107023 | orchestrator | TASKS RECAP ******************************************************************** 
2025-08-29 17:35:55.107038 | orchestrator | Friday 29 August 2025 17:35:49 +0000 (0:00:01.176) 0:01:15.745 ********* 2025-08-29 17:35:55.107056 | orchestrator | =============================================================================== 2025-08-29 17:35:55.107068 | orchestrator | Create admin user ------------------------------------------------------ 38.74s 2025-08-29 17:35:55.107080 | orchestrator | Restart ceph manager service ------------------------------------------- 24.42s 2025-08-29 17:35:55.107104 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.37s 2025-08-29 17:35:55.107114 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.14s 2025-08-29 17:35:55.107124 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.65s 2025-08-29 17:35:55.107133 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.24s 2025-08-29 17:35:55.107143 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.21s 2025-08-29 17:35:55.107152 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.20s 2025-08-29 17:35:55.107162 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.18s 2025-08-29 17:35:55.107171 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.08s 2025-08-29 17:35:55.107181 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.19s 2025-08-29 17:35:55.107199 | orchestrator | 2025-08-29 17:35:55.107208 | orchestrator | 2025-08-29 17:35:55.107218 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:35:55.107227 | orchestrator | 2025-08-29 17:35:55.107237 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 
17:35:55.107246 | orchestrator | Friday 29 August 2025 17:35:09 +0000 (0:00:00.911) 0:00:00.911 ********* 2025-08-29 17:35:55.107256 | orchestrator | ok: [testbed-manager] 2025-08-29 17:35:55.107266 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:35:55.107276 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:35:55.107285 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:35:55.107294 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:35:55.107304 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:35:55.107313 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:35:55.107322 | orchestrator | 2025-08-29 17:35:55.107332 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:35:55.107342 | orchestrator | Friday 29 August 2025 17:35:12 +0000 (0:00:02.580) 0:00:03.491 ********* 2025-08-29 17:35:55.107351 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-08-29 17:35:55.107361 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-08-29 17:35:55.107399 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-08-29 17:35:55.107409 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-08-29 17:35:55.107419 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-08-29 17:35:55.107429 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-08-29 17:35:55.107438 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-08-29 17:35:55.107448 | orchestrator | 2025-08-29 17:35:55.107458 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-08-29 17:35:55.107467 | orchestrator | 2025-08-29 17:35:55.107477 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-08-29 17:35:55.107487 | orchestrator | Friday 29 August 2025 17:35:14 +0000 (0:00:01.788) 0:00:05.280 ********* 2025-08-29 
17:35:55.107497 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:35:55.107508 | orchestrator | 2025-08-29 17:35:55.107518 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-08-29 17:35:55.107528 | orchestrator | Friday 29 August 2025 17:35:18 +0000 (0:00:03.914) 0:00:09.194 ********* 2025-08-29 17:35:55.107537 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-08-29 17:35:55.107547 | orchestrator | 2025-08-29 17:35:55.107557 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-08-29 17:35:55.107567 | orchestrator | Friday 29 August 2025 17:35:23 +0000 (0:00:05.600) 0:00:14.795 ********* 2025-08-29 17:35:55.107577 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-08-29 17:35:55.107588 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-08-29 17:35:55.107597 | orchestrator | 2025-08-29 17:35:55.107607 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-08-29 17:35:55.107617 | orchestrator | Friday 29 August 2025 17:35:31 +0000 (0:00:07.626) 0:00:22.422 ********* 2025-08-29 17:35:55.107626 | orchestrator | ok: [testbed-manager] => (item=service) 2025-08-29 17:35:55.107636 | orchestrator | 2025-08-29 17:35:55.107646 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-08-29 17:35:55.107655 | orchestrator | Friday 29 August 2025 17:35:34 +0000 (0:00:03.588) 0:00:26.010 ********* 2025-08-29 17:35:55.107665 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 17:35:55.107681 | 
orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-08-29 17:35:55.107697 | orchestrator | 2025-08-29 17:35:55.107707 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-08-29 17:35:55.107717 | orchestrator | Friday 29 August 2025 17:35:39 +0000 (0:00:04.083) 0:00:30.093 ********* 2025-08-29 17:35:55.107726 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-08-29 17:35:55.107736 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-08-29 17:35:55.107746 | orchestrator | 2025-08-29 17:35:55.107756 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-08-29 17:35:55.107770 | orchestrator | Friday 29 August 2025 17:35:45 +0000 (0:00:06.954) 0:00:37.048 ********* 2025-08-29 17:35:55.107786 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-08-29 17:35:55.107814 | orchestrator | 2025-08-29 17:35:55.107830 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:35:55.107845 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:35:55.107871 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:35:55.107888 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:35:55.107905 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:35:55.107922 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:35:55.107938 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:35:55.107955 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-08-29 17:35:55.107971 | orchestrator | 2025-08-29 17:35:55.107993 | orchestrator | 2025-08-29 17:35:55.108013 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:35:55.108029 | orchestrator | Friday 29 August 2025 17:35:52 +0000 (0:00:06.084) 0:00:43.132 ********* 2025-08-29 17:35:55.108046 | orchestrator | =============================================================================== 2025-08-29 17:35:55.108062 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.63s 2025-08-29 17:35:55.108079 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.95s 2025-08-29 17:35:55.108091 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.08s 2025-08-29 17:35:55.108100 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 5.60s 2025-08-29 17:35:55.108110 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.08s 2025-08-29 17:35:55.108119 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 3.91s 2025-08-29 17:35:55.108129 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.59s 2025-08-29 17:35:55.108145 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.58s 2025-08-29 17:35:55.108169 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.79s 2025-08-29 17:35:55.108187 | orchestrator | 2025-08-29 17:35:55 | INFO  | Task 8b202c8d-b47f-43c2-a0b0-9bfc526fd344 is in state SUCCESS 2025-08-29 17:35:55.108203 | orchestrator | 2025-08-29 17:35:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:35:58.145426 | orchestrator | 2025-08-29 17:35:58 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state STARTED 2025-08-29 
17:35:58.146802 | orchestrator | 2025-08-29 17:35:58 | INFO  | Task d5a4705b-f97e-4175-8a33-ff9ebf945f82 is in state STARTED 2025-08-29 17:35:58.147249 | orchestrator | 2025-08-29 17:35:58 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 2025-08-29 17:35:58.148419 | orchestrator | 2025-08-29 17:35:58 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:35:58.148472 | orchestrator | 2025-08-29 17:35:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:38:24.531270 | orchestrator | 2025-08-29 17:38:24 | INFO  | Task ed83d2a8-80f4-4b75-988c-e0f6061a5f7c is in state SUCCESS 2025-08-29 17:38:24.532965 | orchestrator | 2025-08-29 17:38:24.533159 | orchestrator | 2025-08-29 
17:38:24.533184 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 17:38:24.533202 | orchestrator |
2025-08-29 17:38:24.533316 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 17:38:24.533414 | orchestrator | Friday 29 August 2025 17:34:33 +0000 (0:00:00.332) 0:00:00.332 *********
2025-08-29 17:38:24.533429 | orchestrator | ok: [testbed-manager]
2025-08-29 17:38:24.533441 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:38:24.533453 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:38:24.533464 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:38:24.533474 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:38:24.533485 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:38:24.533496 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:38:24.533536 | orchestrator |
2025-08-29 17:38:24.533550 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 17:38:24.533561 | orchestrator | Friday 29 August 2025 17:34:34 +0000 (0:00:01.175) 0:00:01.507 *********
2025-08-29 17:38:24.533600 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-08-29 17:38:24.533612 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-08-29 17:38:24.533623 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-08-29 17:38:24.533642 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-08-29 17:38:24.533662 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-08-29 17:38:24.533680 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-08-29 17:38:24.533698 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-08-29 17:38:24.533716 | orchestrator |
2025-08-29 17:38:24.533736 | orchestrator | PLAY [Apply role prometheus] ***************************************************
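Editor's note: the repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` records earlier in this log come from the osism client polling its task queue until every task reaches a terminal state. A minimal sketch of that poll-until-done loop, assuming a hypothetical `get_state` callable (the real client queries its Celery-style result backend, which this sketch does not reproduce):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}


def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll each task's state until all reach a terminal state.

    `get_state` is a hypothetical callable mapping a task id to a state
    string ("STARTED", "SUCCESS", ...); `log` mirrors the INFO lines
    seen in the job console above.
    """
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state not in TERMINAL_STATES:
                still_running.append(task_id)
        if not still_running:
            return
        log(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
        pending = still_running
```

This matches the cadence visible in the log: each pass reports every pending task, then sleeps before re-checking; tasks drop out of the loop once they report SUCCESS.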
2025-08-29 17:38:24.533755 | orchestrator | 2025-08-29 17:38:24.533766 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-08-29 17:38:24.533777 | orchestrator | Friday 29 August 2025 17:34:35 +0000 (0:00:00.818) 0:00:02.326 ********* 2025-08-29 17:38:24.533789 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:38:24.533802 | orchestrator | 2025-08-29 17:38:24.533813 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-08-29 17:38:24.533936 | orchestrator | Friday 29 August 2025 17:34:37 +0000 (0:00:01.971) 0:00:04.297 ********* 2025-08-29 17:38:24.533953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.534181 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 17:38:24.534209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.534232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.534264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.534277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.534289 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.534362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.534410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.534469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.534482 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.534547 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.534743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.534762 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.534774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.534797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.534818 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.534830 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.534841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.534861 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 17:38:24.534876 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.534894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.534906 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.534922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.534933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.534945 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.534956 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.534974 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.534988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.535017 | orchestrator | 2025-08-29 17:38:24.535037 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-08-29 17:38:24.535055 | orchestrator | Friday 29 August 2025 17:34:41 +0000 (0:00:04.034) 0:00:08.331 ********* 2025-08-29 17:38:24.535075 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, 
testbed-node-4, testbed-node-5 2025-08-29 17:38:24.535092 | orchestrator | 2025-08-29 17:38:24.535103 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-08-29 17:38:24.535114 | orchestrator | Friday 29 August 2025 17:34:43 +0000 (0:00:01.578) 0:00:09.909 ********* 2025-08-29 17:38:24.535131 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 17:38:24.535144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.535155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.535167 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.535201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.535213 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.535232 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.535244 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.535255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.535271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.535283 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.535295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.535325 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.535344 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.535356 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.535368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.535463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.535527 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.535560 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.535572 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.535593 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.535625 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-08-29 17:38:24.535648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.535697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.535714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.535726 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.535737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.535764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.535776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.535787 | orchestrator |
2025-08-29 17:38:24.535798 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2025-08-29 17:38:24.535809 | orchestrator | Friday 29 August 2025 17:34:48 +0000 (0:00:05.266) 0:00:15.176 *********
2025-08-29 17:38:24.535820 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-08-29 17:38:24.535844 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 17:38:24.535861 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.535877 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-08-29 17:38:24.535925 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.535950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 17:38:24.535966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 17:38:24.535983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536151 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:38:24.536170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 17:38:24.536198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536262 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:38:24.536273 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:38:24.536284 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:38:24.536303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 17:38:24.536315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536337 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:38:24.536348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 17:38:24.536366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536421 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:38:24.536432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 17:38:24.536444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536476 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:38:24.536487 | orchestrator |
2025-08-29 17:38:24.536498 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-08-29 17:38:24.536509 | orchestrator | Friday 29 August 2025 17:34:49 +0000 (0:00:01.356) 0:00:16.532 *********
2025-08-29 17:38:24.536521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 17:38:24.536533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536592 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-08-29 17:38:24.536610 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 17:38:24.536622 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536634 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-08-29 17:38:24.536651 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536674 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:38:24.536686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 17:38:24.536697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536750 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:38:24.536761 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:38:24.536772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 17:38:24.536784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:38:24.536845 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:38:24.536862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 17:38:24.536874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536896 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:38:24.536907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 17:38:24.536930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536942 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536953 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:38:24.536964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 17:38:24.536975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 17:38:24.536993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 17:38:24.537004 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:38:24.537015 | orchestrator | 2025-08-29 17:38:24.537026 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-08-29 17:38:24.537037 | orchestrator | Friday 29 August 2025 17:34:52 +0000 (0:00:02.408) 0:00:18.941 ********* 2025-08-29 17:38:24.537048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.537060 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 17:38:24.537082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.537094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.537105 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.537117 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 
17:38:24.537133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.537146 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.537157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.537174 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.537191 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.537217 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.537237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.537257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.537281 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.537293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.537304 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.537322 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.537339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.537351 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.537362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.537373 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.537452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.537467 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 17:38:24.537491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.537509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.537521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.537531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.537541 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.537552 | orchestrator | 2025-08-29 17:38:24.537561 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-08-29 17:38:24.537571 | orchestrator | Friday 29 August 2025 17:34:58 +0000 (0:00:05.820) 0:00:24.762 ********* 2025-08-29 17:38:24.537581 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 17:38:24.537591 | orchestrator | 2025-08-29 17:38:24.537601 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-08-29 17:38:24.537616 | orchestrator | Friday 29 August 2025 17:34:59 +0000 (0:00:01.283) 0:00:26.045 ********* 2025-08-29 17:38:24.537626 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087077, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.97824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.537644 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087077, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.97824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.537655 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087077, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.97824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 17:38:24.537669 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087077, 'dev': 122, 
'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.97824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.537680 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087077, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.97824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.537690 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087077, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.97824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.537706 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087110, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9872236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.537716 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087110, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9872236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.537735 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087077, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.97824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.537745 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087110, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9872236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2025-08-29 17:38:24.537760 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087110, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9872236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.537770 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087110, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9872236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.537780 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1087068, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.977746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.538014 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1087068, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.977746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:38:24.538074 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rules, size=12980)
2025-08-29 17:38:24.538084 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rules, size=12980)
2025-08-29 17:38:24.538094 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rules, size=55956)
2025-08-29 17:38:24.538110 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules, size=12293)
2025-08-29 17:38:24.538121 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rules, size=55956)
2025-08-29 17:38:24.538131 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/cadvisor.rules, size=3900)
2025-08-29 17:38:24.538149 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/openstack.rules, size=12293)
2025-08-29 17:38:24.538165 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rules, size=55956)
2025-08-29 17:38:24.538176 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules, size=55956)
2025-08-29 17:38:24.538186 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules, size=12293)
2025-08-29 17:38:24.538201 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rules, size=55956)
2025-08-29 17:38:24.538211 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules, size=7933)
2025-08-29 17:38:24.538221 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules, size=12293)
2025-08-29 17:38:24.538232 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/openstack.rules, size=12293)
2025-08-29 17:38:24.538252 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/cadvisor.rules, size=3900)
2025-08-29 17:38:24.538263 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules, size=13522)
2025-08-29 17:38:24.538273 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/cadvisor.rules, size=3900)
2025-08-29 17:38:24.538287 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules, size=3900)
2025-08-29 17:38:24.538297 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules, size=12293)
2025-08-29 17:38:24.538307 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules, size=12293)
2025-08-29 17:38:24.538324 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules, size=5593)
2025-08-29 17:38:24.538339 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules, size=7933)
2025-08-29 17:38:24.538349 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules, size=7933)
2025-08-29 17:38:24.538359 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules, size=3900)
2025-08-29 17:38:24.538374 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules, size=3900)
2025-08-29 17:38:24.538403 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules, size=7933)
2025-08-29 17:38:24.538413 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules, size=13522)
2025-08-29 17:38:24.538430 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules, size=5987)
2025-08-29 17:38:24.538445 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules, size=7933)
2025-08-29 17:38:24.538455 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules, size=13522)
2025-08-29 17:38:24.538465 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules, size=5593)
2025-08-29 17:38:24.538480 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules, size=7933)
2025-08-29 17:38:24.538490 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules, size=13522)
2025-08-29 17:38:24.538500 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2025-08-29 17:38:24.538527 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules, size=13522)
2025-08-29 17:38:24.538543 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules, size=3900)
2025-08-29 17:38:24.538553 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2025-08-29 17:38:24.538563 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules, size=5593)
2025-08-29 17:38:24.538573 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules, size=5987)
2025-08-29 17:38:24.538588 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules, size=5593)
2025-08-29 17:38:24.538599 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules, size=13522)
2025-08-29 17:38:24.538615 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules, size=5593)
2025-08-29 17:38:24.538633 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules, size=5987)
2025-08-29 17:38:24.538645 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2025-08-29 17:38:24.538657 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules, size=5593)
2025-08-29 17:38:24.538668 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules, size=5987)
2025-08-29 17:38:24.538687 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2025-08-29 17:38:24.538706 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules, size=334)
2025-08-29 17:38:24.538718 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules, size=7933)
2025-08-29 17:38:24.538734 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules, size=5987)
2025-08-29 17:38:24.538746 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules, size=5987)
2025-08-29 17:38:24.538757 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2025-08-29 17:38:24.538768 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules, size=334)
2025-08-29 17:38:24.538783 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2025-08-29 17:38:24.538800 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2025-08-29 17:38:24.538810 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules, size=7408)
2025-08-29 17:38:24.538825 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2025-08-29 17:38:24.538835 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules, size=7408)
2025-08-29 17:38:24.538845 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2025-08-29 17:38:24.538856 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules, size=3)
2025-08-29 17:38:24.538871 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2025-08-29 17:38:24.538887 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules, size=13522)
2025-08-29 17:38:24.538897 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2025-08-29 17:38:24.538913 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2025-08-29 17:38:24.538923 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules, size=3)
2025-08-29 17:38:24.538933 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules, size=334)
2025-08-29 17:38:24.538943 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules, size=5051)
2025-08-29 17:38:24.538963 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1087116, 'dev': 122, 'nlink': 1, 'atime': 
1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9893165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.538973 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1087060, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9712782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.538983 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1087116, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9893165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.538999 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1087116, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9893165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539009 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1087105, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.98596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539020 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087089, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.980942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539029 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1087105, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.98596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539050 | 
orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087067, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.97396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539060 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1087105, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.98596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539070 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1087105, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.98596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539085 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087086, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9799738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539095 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087089, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.980942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539106 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1087083, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9792595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 17:38:24.539115 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087067, 'dev': 122, 'nlink': 1, 'atime': 
1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.97396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539135 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087067, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.97396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539145 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1087060, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9712782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539155 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087067, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.97396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539170 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087115, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.988696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539181 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:24.539191 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1087060, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9712782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539201 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087086, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9799738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2025-08-29 17:38:24.539211 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087089, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.980942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539231 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087089, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.980942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539241 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1087060, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9712782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539252 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087086, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9799738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539266 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1087075, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.977746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 17:38:24.539276 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1087060, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9712782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539286 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
3539, 'inode': 1087115, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.988696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539302 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:38:24.539312 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087089, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.980942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539329 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087086, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9799738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539339 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087115, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 
'ctime': 1756486234.988696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539349 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:24.539359 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087089, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.980942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539374 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087086, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9799738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539400 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087115, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.988696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539411 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:38:24.539421 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087086, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9799738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539437 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087115, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.988696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539446 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:24.539460 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087115, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.988696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 17:38:24.539471 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:38:24.539480 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087107, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9872236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 17:38:24.539491 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087058, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.969741, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 17:38:24.539506 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1087116, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9893165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2025-08-29 17:38:24.539517 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1087105, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.98596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 17:38:24.539532 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087067, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.97396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 17:38:24.539542 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1087060, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9712782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 17:38:24.539556 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087089, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.980942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:38:24.539567 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087086, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9799738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:38:24.539577 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087115, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.988696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:38:24.539587 | orchestrator |
2025-08-29 17:38:24.539597 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-08-29 17:38:24.539607 | orchestrator | Friday 29 August 2025 17:35:33 +0000 (0:00:34.459) 0:01:00.504 *********
2025-08-29 17:38:24.539617 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 17:38:24.539626 | orchestrator |
2025-08-29 17:38:24.539687 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-08-29 17:38:24.539699 | orchestrator | Friday 29 August 2025 17:35:34 +0000 (0:00:00.823) 0:01:01.328 *********
2025-08-29 17:38:24.539709 | orchestrator | [WARNING]: Skipped
2025-08-29 17:38:24.539720 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 17:38:24.539730 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-08-29 17:38:24.539739 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 17:38:24.539755 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-08-29 17:38:24.539765 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 17:38:24.539774 | orchestrator | [WARNING]: Skipped
2025-08-29 17:38:24.539784 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 17:38:24.539794 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-08-29 17:38:24.539803 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 17:38:24.539813 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-08-29 17:38:24.539822 | orchestrator | [WARNING]: Skipped
2025-08-29 17:38:24.539832 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 17:38:24.539841 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-08-29 17:38:24.539851 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 17:38:24.539860 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-08-29 17:38:24.539869 | orchestrator | [WARNING]: Skipped
2025-08-29 17:38:24.539879 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 17:38:24.539889 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-08-29 17:38:24.539898 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 17:38:24.539907 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-08-29 17:38:24.539917 | orchestrator | [WARNING]: Skipped
2025-08-29 17:38:24.539926 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 17:38:24.539935 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-08-29 17:38:24.539945 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 17:38:24.539954 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-08-29 17:38:24.539964 | orchestrator | [WARNING]: Skipped
2025-08-29 17:38:24.539974 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 17:38:24.539983 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-08-29 17:38:24.539992 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 17:38:24.540001 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-08-29 17:38:24.540011 | orchestrator | [WARNING]: Skipped
2025-08-29 17:38:24.540020 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 17:38:24.540035 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-08-29 17:38:24.540044 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 17:38:24.540054 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-08-29 17:38:24.540063 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 17:38:24.540073 |
orchestrator | ok: [testbed-node-1 -> localhost] 2025-08-29 17:38:24.540082 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 17:38:24.540091 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 17:38:24.540101 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 17:38:24.540110 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-08-29 17:38:24.540120 | orchestrator | 2025-08-29 17:38:24.540129 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-08-29 17:38:24.540139 | orchestrator | Friday 29 August 2025 17:35:37 +0000 (0:00:02.312) 0:01:03.640 ********* 2025-08-29 17:38:24.540148 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 17:38:24.540158 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:24.540168 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 17:38:24.540178 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 17:38:24.540194 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:24.540204 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:24.540213 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 17:38:24.540223 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:38:24.540233 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 17:38:24.540242 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:38:24.540252 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 17:38:24.540261 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:38:24.540271 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-08-29 17:38:24.540280 | orchestrator | 2025-08-29 17:38:24.540290 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-08-29 17:38:24.540299 | orchestrator | Friday 29 August 2025 17:35:57 +0000 (0:00:20.837) 0:01:24.477 ********* 2025-08-29 17:38:24.540309 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 17:38:24.540318 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:24.540333 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 17:38:24.540343 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:24.540353 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 17:38:24.540362 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:38:24.540372 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 17:38:24.540396 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:38:24.540406 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 17:38:24.540416 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:38:24.540425 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 17:38:24.540435 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:24.540444 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-08-29 17:38:24.540454 | orchestrator | 2025-08-29 17:38:24.540463 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-08-29 17:38:24.540473 | orchestrator | Friday 29 August 2025 17:36:02 
+0000 (0:00:05.116) 0:01:29.594 ********* 2025-08-29 17:38:24.540483 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 17:38:24.540493 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 17:38:24.540503 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 17:38:24.540512 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:24.540521 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:24.540531 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:24.540540 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-08-29 17:38:24.540550 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 17:38:24.540559 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:38:24.540569 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 17:38:24.540585 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:38:24.540595 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 17:38:24.540605 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:38:24.540614 | orchestrator | 2025-08-29 17:38:24.540628 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-08-29 17:38:24.540638 | orchestrator | Friday 29 August 2025 17:36:06 +0000 (0:00:03.241) 0:01:32.835 ********* 2025-08-29 17:38:24.540648 | 
orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 17:38:24.540657 | orchestrator | 2025-08-29 17:38:24.540667 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-08-29 17:38:24.540676 | orchestrator | Friday 29 August 2025 17:36:07 +0000 (0:00:00.881) 0:01:33.716 ********* 2025-08-29 17:38:24.540686 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:38:24.540695 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:24.540705 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:24.540714 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:24.540723 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:38:24.540733 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:38:24.540742 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:38:24.540752 | orchestrator | 2025-08-29 17:38:24.540761 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-08-29 17:38:24.540771 | orchestrator | Friday 29 August 2025 17:36:07 +0000 (0:00:00.828) 0:01:34.544 ********* 2025-08-29 17:38:24.540780 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:38:24.540789 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:38:24.540799 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:38:24.540808 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:38:24.540818 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:38:24.540827 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:38:24.540836 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:38:24.540845 | orchestrator | 2025-08-29 17:38:24.540855 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-08-29 17:38:24.540864 | orchestrator | Friday 29 August 2025 17:36:11 +0000 (0:00:03.612) 0:01:38.157 ********* 2025-08-29 17:38:24.540874 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 17:38:24.540883 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 17:38:24.540893 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 17:38:24.540902 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:24.540911 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:24.540921 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:38:24.540930 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 17:38:24.540939 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:24.540949 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 17:38:24.540963 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:38:24.540974 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 17:38:24.540983 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:38:24.540993 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 17:38:24.541002 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:38:24.541012 | orchestrator | 2025-08-29 17:38:24.541021 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-08-29 17:38:24.541031 | orchestrator | Friday 29 August 2025 17:36:15 +0000 (0:00:03.688) 0:01:41.846 ********* 2025-08-29 17:38:24.541040 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 17:38:24.541056 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-08-29 17:38:24.541065 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 17:38:24.541075 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 17:38:24.541084 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:24.541094 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 17:38:24.541104 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:24.541113 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 17:38:24.541123 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:38:24.541132 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 17:38:24.541142 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:38:24.541151 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 17:38:24.541160 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:38:24.541170 | orchestrator | 2025-08-29 17:38:24.541179 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-08-29 17:38:24.541189 | orchestrator | Friday 29 August 2025 17:36:18 +0000 (0:00:02.802) 0:01:44.648 ********* 2025-08-29 17:38:24.541198 | orchestrator | [WARNING]: Skipped 2025-08-29 17:38:24.541208 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-08-29 17:38:24.541217 | orchestrator | due to this access issue: 2025-08-29 17:38:24.541227 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-08-29 17:38:24.541236 | orchestrator | not a directory 2025-08-29 17:38:24.541246 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 17:38:24.541255 | orchestrator | 2025-08-29 
17:38:24.541265 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-08-29 17:38:24.541274 | orchestrator | Friday 29 August 2025 17:36:19 +0000 (0:00:01.504) 0:01:46.153 ********* 2025-08-29 17:38:24.541289 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:38:24.541298 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:24.541308 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:24.541317 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:24.541326 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:38:24.541336 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:38:24.541345 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:38:24.541354 | orchestrator | 2025-08-29 17:38:24.541364 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-08-29 17:38:24.541373 | orchestrator | Friday 29 August 2025 17:36:20 +0000 (0:00:01.458) 0:01:47.612 ********* 2025-08-29 17:38:24.541433 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:38:24.541445 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:24.541454 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:24.541464 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:24.541473 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:38:24.541483 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:38:24.541492 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:38:24.541502 | orchestrator | 2025-08-29 17:38:24.541511 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-08-29 17:38:24.541521 | orchestrator | Friday 29 August 2025 17:36:22 +0000 (0:00:01.094) 0:01:48.707 ********* 2025-08-29 17:38:24.541531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.541555 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 17:38:24.541564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.541573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.541581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.541595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.541603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.541612 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.541625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.541638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.541647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-08-29 17:38:24.541655 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.541664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.541672 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 17:38:24.541687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.541696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.541709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.541721 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.541729 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.541738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.541746 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.541757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.541766 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.541779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.541792 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 17:38:24.541802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.541811 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 17:38:24.541819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.541831 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:38:24.541840 | orchestrator | 2025-08-29 17:38:24.541848 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-08-29 17:38:24.541856 | orchestrator | Friday 29 August 2025 17:36:28 +0000 (0:00:06.092) 0:01:54.799 ********* 2025-08-29 17:38:24.541868 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-08-29 17:38:24.541876 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:38:24.541884 | orchestrator | 2025-08-29 17:38:24.541892 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 17:38:24.541900 | orchestrator | Friday 29 August 2025 17:36:29 +0000 (0:00:01.498) 0:01:56.297 ********* 2025-08-29 17:38:24.541907 | orchestrator | 2025-08-29 17:38:24.541915 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 17:38:24.541923 | orchestrator | Friday 29 August 2025 17:36:29 +0000 (0:00:00.071) 0:01:56.369 ********* 2025-08-29 17:38:24.541931 | orchestrator | 2025-08-29 17:38:24.541938 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 17:38:24.541946 | orchestrator | Friday 29 August 2025 17:36:29 +0000 (0:00:00.074) 0:01:56.444 ********* 2025-08-29 17:38:24.541954 | orchestrator | 2025-08-29 17:38:24.541961 | orchestrator | TASK [prometheus : Flush 
handlers] ********************************************* 2025-08-29 17:38:24.541969 | orchestrator | Friday 29 August 2025 17:36:29 +0000 (0:00:00.068) 0:01:56.512 ********* 2025-08-29 17:38:24.541977 | orchestrator | 2025-08-29 17:38:24.541985 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 17:38:24.541993 | orchestrator | Friday 29 August 2025 17:36:30 +0000 (0:00:00.539) 0:01:57.052 ********* 2025-08-29 17:38:24.542000 | orchestrator | 2025-08-29 17:38:24.542008 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 17:38:24.542037 | orchestrator | Friday 29 August 2025 17:36:30 +0000 (0:00:00.169) 0:01:57.222 ********* 2025-08-29 17:38:24.542047 | orchestrator | 2025-08-29 17:38:24.542055 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 17:38:24.542063 | orchestrator | Friday 29 August 2025 17:36:30 +0000 (0:00:00.143) 0:01:57.365 ********* 2025-08-29 17:38:24.542070 | orchestrator | 2025-08-29 17:38:24.542078 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-08-29 17:38:24.542086 | orchestrator | Friday 29 August 2025 17:36:30 +0000 (0:00:00.188) 0:01:57.553 ********* 2025-08-29 17:38:24.542094 | orchestrator | changed: [testbed-manager] 2025-08-29 17:38:24.542101 | orchestrator | 2025-08-29 17:38:24.542109 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-08-29 17:38:24.542117 | orchestrator | Friday 29 August 2025 17:36:46 +0000 (0:00:16.027) 0:02:13.580 ********* 2025-08-29 17:38:24.542129 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:38:24.542137 | orchestrator | changed: [testbed-manager] 2025-08-29 17:38:24.542145 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:38:24.542152 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:38:24.542160 | orchestrator | 
changed: [testbed-node-5] 2025-08-29 17:38:24.542168 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:38:24.542175 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:38:24.542183 | orchestrator | 2025-08-29 17:38:24.542191 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-08-29 17:38:24.542199 | orchestrator | Friday 29 August 2025 17:37:06 +0000 (0:00:19.153) 0:02:32.734 ********* 2025-08-29 17:38:24.542206 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:38:24.542214 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:38:24.542222 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:38:24.542230 | orchestrator | 2025-08-29 17:38:24.542238 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-08-29 17:38:24.542246 | orchestrator | Friday 29 August 2025 17:37:15 +0000 (0:00:09.649) 0:02:42.384 ********* 2025-08-29 17:38:24.542253 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:38:24.542261 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:38:24.542269 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:38:24.542276 | orchestrator | 2025-08-29 17:38:24.542284 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-08-29 17:38:24.542292 | orchestrator | Friday 29 August 2025 17:37:28 +0000 (0:00:13.015) 0:02:55.400 ********* 2025-08-29 17:38:24.542305 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:38:24.542313 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:38:24.542320 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:38:24.542328 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:38:24.542336 | orchestrator | changed: [testbed-manager] 2025-08-29 17:38:24.542343 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:38:24.542351 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:38:24.542358 | orchestrator | 2025-08-29 
17:38:24.542366 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-08-29 17:38:24.542374 | orchestrator | Friday 29 August 2025 17:37:45 +0000 (0:00:17.055) 0:03:12.455 ********* 2025-08-29 17:38:24.542395 | orchestrator | changed: [testbed-manager] 2025-08-29 17:38:24.542403 | orchestrator | 2025-08-29 17:38:24.542411 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-08-29 17:38:24.542419 | orchestrator | Friday 29 August 2025 17:37:58 +0000 (0:00:13.149) 0:03:25.605 ********* 2025-08-29 17:38:24.542427 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:38:24.542434 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:38:24.542442 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:38:24.542449 | orchestrator | 2025-08-29 17:38:24.542457 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-08-29 17:38:24.542465 | orchestrator | Friday 29 August 2025 17:38:04 +0000 (0:00:05.963) 0:03:31.569 ********* 2025-08-29 17:38:24.542473 | orchestrator | changed: [testbed-manager] 2025-08-29 17:38:24.542481 | orchestrator | 2025-08-29 17:38:24.542488 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-08-29 17:38:24.542496 | orchestrator | Friday 29 August 2025 17:38:10 +0000 (0:00:05.489) 0:03:37.059 ********* 2025-08-29 17:38:24.542504 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:38:24.542512 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:38:24.542519 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:38:24.542527 | orchestrator | 2025-08-29 17:38:24.542539 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:38:24.542547 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-08-29 17:38:24.542555 | 
orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 17:38:24.542563 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 17:38:24.542572 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 17:38:24.542579 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-08-29 17:38:24.542587 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-08-29 17:38:24.542595 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-08-29 17:38:24.542603 | orchestrator | 2025-08-29 17:38:24.542611 | orchestrator | 2025-08-29 17:38:24.542619 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:38:24.542627 | orchestrator | Friday 29 August 2025 17:38:20 +0000 (0:00:10.555) 0:03:47.614 ********* 2025-08-29 17:38:24.542634 | orchestrator | =============================================================================== 2025-08-29 17:38:24.542642 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 34.46s 2025-08-29 17:38:24.542650 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 20.84s 2025-08-29 17:38:24.542663 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 19.15s 2025-08-29 17:38:24.542671 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 17.06s 2025-08-29 17:38:24.542678 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.03s 2025-08-29 17:38:24.542690 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 13.15s 2025-08-29 
17:38:24.542698 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 13.02s 2025-08-29 17:38:24.542706 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.56s 2025-08-29 17:38:24.542714 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 9.65s 2025-08-29 17:38:24.542721 | orchestrator | prometheus : Check prometheus containers -------------------------------- 6.09s 2025-08-29 17:38:24.542729 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.96s 2025-08-29 17:38:24.542737 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.82s 2025-08-29 17:38:24.542745 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.49s 2025-08-29 17:38:24.542752 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.27s 2025-08-29 17:38:24.542760 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.12s 2025-08-29 17:38:24.542768 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.03s 2025-08-29 17:38:24.542776 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.69s 2025-08-29 17:38:24.542783 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.61s 2025-08-29 17:38:24.542791 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.24s 2025-08-29 17:38:24.542799 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.80s 2025-08-29 17:38:24.542807 | orchestrator | 2025-08-29 17:38:24 | INFO  | Task d5a4705b-f97e-4175-8a33-ff9ebf945f82 is in state STARTED 2025-08-29 17:38:24.542814 | orchestrator | 2025-08-29 17:38:24 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state STARTED 
2025-08-29 17:38:24.542822 | orchestrator | 2025-08-29 17:38:24 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:38:24.542830 | orchestrator | 2025-08-29 17:38:24 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:38:24.542839 | orchestrator | 2025-08-29 17:38:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:38:52.013314 | orchestrator | 2025-08-29 17:38:52 | INFO  | Task d5a4705b-f97e-4175-8a33-ff9ebf945f82 is in state STARTED 2025-08-29 17:38:52.018529 | orchestrator | 2025-08-29 17:38:52.018576 | orchestrator | 2025-08-29 17:38:52.018590 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
2025-08-29 17:38:52.018602 | orchestrator | 2025-08-29 17:38:52.018614 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 17:38:52.018625 | orchestrator | Friday 29 August 2025 17:35:11 +0000 (0:00:00.833) 0:00:00.833 ********* 2025-08-29 17:38:52.018636 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:38:52.018649 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:38:52.018659 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:38:52.018670 | orchestrator | 2025-08-29 17:38:52.018681 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:38:52.018692 | orchestrator | Friday 29 August 2025 17:35:12 +0000 (0:00:00.961) 0:00:01.794 ********* 2025-08-29 17:38:52.018703 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-08-29 17:38:52.018715 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-08-29 17:38:52.018726 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-08-29 17:38:52.018736 | orchestrator | 2025-08-29 17:38:52.018748 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-08-29 17:38:52.018758 | orchestrator | 2025-08-29 17:38:52.018769 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 17:38:52.018780 | orchestrator | Friday 29 August 2025 17:35:13 +0000 (0:00:01.416) 0:00:03.211 ********* 2025-08-29 17:38:52.018791 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:38:52.018803 | orchestrator | 2025-08-29 17:38:52.018815 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-08-29 17:38:52.018826 | orchestrator | Friday 29 August 2025 17:35:15 +0000 (0:00:01.688) 0:00:04.900 ********* 2025-08-29 17:38:52.018837 | orchestrator | changed: 
[testbed-node-0] => (item=glance (image)) 2025-08-29 17:38:52.018847 | orchestrator | 2025-08-29 17:38:52.018858 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-08-29 17:38:52.018869 | orchestrator | Friday 29 August 2025 17:35:24 +0000 (0:00:09.012) 0:00:13.913 ********* 2025-08-29 17:38:52.018880 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-08-29 17:38:52.018892 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-08-29 17:38:52.018903 | orchestrator | 2025-08-29 17:38:52.018914 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-08-29 17:38:52.018925 | orchestrator | Friday 29 August 2025 17:35:30 +0000 (0:00:05.503) 0:00:19.417 ********* 2025-08-29 17:38:52.018936 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-08-29 17:38:52.018947 | orchestrator | 2025-08-29 17:38:52.018957 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-08-29 17:38:52.018968 | orchestrator | Friday 29 August 2025 17:35:33 +0000 (0:00:03.253) 0:00:22.670 ********* 2025-08-29 17:38:52.018980 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 17:38:52.018991 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-08-29 17:38:52.019002 | orchestrator | 2025-08-29 17:38:52.019013 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-08-29 17:38:52.019024 | orchestrator | Friday 29 August 2025 17:35:37 +0000 (0:00:03.610) 0:00:26.281 ********* 2025-08-29 17:38:52.019035 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 17:38:52.019046 | orchestrator | 2025-08-29 17:38:52.019057 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 
2025-08-29 17:38:52.019094 | orchestrator | Friday 29 August 2025 17:35:40 +0000 (0:00:03.045) 0:00:29.326 ********* 2025-08-29 17:38:52.019106 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-08-29 17:38:52.019117 | orchestrator | 2025-08-29 17:38:52.019128 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-08-29 17:38:52.019140 | orchestrator | Friday 29 August 2025 17:35:44 +0000 (0:00:03.976) 0:00:33.302 ********* 2025-08-29 17:38:52.019215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 17:38:52.019237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
2025-08-29 17:38:52.019252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 17:38:52.019275 | orchestrator | 2025-08-29 17:38:52.019289 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 17:38:52.019302 | orchestrator | Friday 29 August 2025 17:35:49 +0000 
(0:00:05.423) 0:00:38.726 ********* 2025-08-29 17:38:52.019315 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:38:52.019327 | orchestrator | 2025-08-29 17:38:52.019346 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-08-29 17:38:52.019359 | orchestrator | Friday 29 August 2025 17:35:50 +0000 (0:00:00.768) 0:00:39.495 ********* 2025-08-29 17:38:52.019372 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:38:52.019416 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:38:52.019430 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:38:52.019442 | orchestrator | 2025-08-29 17:38:52.019455 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-08-29 17:38:52.019467 | orchestrator | Friday 29 August 2025 17:35:54 +0000 (0:00:04.543) 0:00:44.038 ********* 2025-08-29 17:38:52.019480 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 17:38:52.019493 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 17:38:52.019505 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 17:38:52.019516 | orchestrator | 2025-08-29 17:38:52.019527 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-08-29 17:38:52.019538 | orchestrator | Friday 29 August 2025 17:35:56 +0000 (0:00:01.678) 0:00:45.717 ********* 2025-08-29 17:38:52.019549 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 17:38:52.019560 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 
2025-08-29 17:38:52.019571 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 17:38:52.019582 | orchestrator | 2025-08-29 17:38:52.019593 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-08-29 17:38:52.019604 | orchestrator | Friday 29 August 2025 17:35:57 +0000 (0:00:01.099) 0:00:46.816 ********* 2025-08-29 17:38:52.019622 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:38:52.019633 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:38:52.019644 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:38:52.019655 | orchestrator | 2025-08-29 17:38:52.019666 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-08-29 17:38:52.019677 | orchestrator | Friday 29 August 2025 17:35:58 +0000 (0:00:00.661) 0:00:47.478 ********* 2025-08-29 17:38:52.019688 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:52.019704 | orchestrator | 2025-08-29 17:38:52.019722 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-08-29 17:38:52.019735 | orchestrator | Friday 29 August 2025 17:35:59 +0000 (0:00:00.814) 0:00:48.292 ********* 2025-08-29 17:38:52.019746 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:52.019757 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:52.019768 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:52.019778 | orchestrator | 2025-08-29 17:38:52.019789 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 17:38:52.019800 | orchestrator | Friday 29 August 2025 17:35:59 +0000 (0:00:00.768) 0:00:49.060 ********* 2025-08-29 17:38:52.019811 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:38:52.019822 | orchestrator | 2025-08-29 17:38:52.019872 | 
orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-08-29 17:38:52.019884 | orchestrator | Friday 29 August 2025 17:36:00 +0000 (0:00:00.930) 0:00:49.991 ********* 2025-08-29 17:38:52.019909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 17:38:52.019924 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 17:38:52.019945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 17:38:52.019957 | orchestrator | 2025-08-29 17:38:52.019968 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-08-29 17:38:52.019979 | orchestrator | Friday 29 August 2025 17:36:07 +0000 (0:00:06.968) 0:00:56.959 ********* 2025-08-29 17:38:52.020005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 17:38:52.020025 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:52.020037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 17:38:52.020049 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:52.020073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 17:38:52.020093 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:52.020261 | orchestrator | 2025-08-29 17:38:52.020276 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-08-29 17:38:52.020287 | orchestrator | Friday 29 August 2025 17:36:12 +0000 (0:00:05.007) 0:01:01.966 ********* 2025-08-29 17:38:52.020299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 17:38:52.020311 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:52.020337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 17:38:52.020351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 17:38:52.020371 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:52.020382 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:52.020415 | orchestrator | 2025-08-29 17:38:52.020426 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-08-29 17:38:52.020437 | orchestrator | Friday 29 August 2025 17:36:20 +0000 (0:00:07.490) 0:01:09.457 ********* 2025-08-29 17:38:52.020448 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:52.020459 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:52.020469 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:52.020480 | orchestrator | 2025-08-29 17:38:52.020491 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-08-29 17:38:52.020502 | orchestrator | Friday 29 August 2025 17:36:27 +0000 (0:00:06.916) 0:01:16.374 ********* 2025-08-29 17:38:52.020524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 17:38:52.020550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 17:38:52.020563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 17:38:52.020576 | orchestrator | 2025-08-29 17:38:52.020586 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-08-29 17:38:52.020597 | orchestrator | Friday 29 August 2025 17:36:32 +0000 (0:00:05.706) 0:01:22.081 ********* 2025-08-29 17:38:52.020608 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:38:52.020624 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:38:52.020642 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:38:52.020653 | orchestrator | 2025-08-29 17:38:52.020664 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-08-29 17:38:52.020684 | orchestrator | Friday 29 August 2025 17:36:41 +0000 (0:00:08.760) 0:01:30.841 ********* 2025-08-29 17:38:52.020694 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:52.020710 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:52.020721 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 17:38:52.020732 | orchestrator | 2025-08-29 17:38:52.020743 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-08-29 17:38:52.020761 | orchestrator | Friday 29 August 2025 17:36:48 +0000 (0:00:06.442) 0:01:37.284 ********* 2025-08-29 17:38:52.020773 | orchestrator | 2025-08-29 17:38:52 | INFO  | Task ccde2578-4785-4832-9a28-75f40f5a001d is in state SUCCESS 2025-08-29 17:38:52.020784 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:52.020795 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:52.020805 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:52.020816 | orchestrator | 2025-08-29 17:38:52.020827 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-08-29 17:38:52.020837 | orchestrator | Friday 29 August 2025 17:36:59 +0000 (0:00:11.710) 0:01:48.994 ********* 2025-08-29 17:38:52.020848 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:52.020859 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:52.020869 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:52.020880 | orchestrator | 2025-08-29 17:38:52.020891 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-08-29 17:38:52.020903 | orchestrator | Friday 29 August 2025 17:37:07 +0000 (0:00:07.776) 0:01:56.771 ********* 2025-08-29 17:38:52.020916 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:52.020928 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:52.020941 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:52.020953 | orchestrator | 2025-08-29 17:38:52.020965 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-08-29 17:38:52.020978 | orchestrator | Friday 29 August 2025 17:37:14 +0000 (0:00:06.751) 0:02:03.523 ********* 2025-08-29 17:38:52.020991 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 17:38:52.021004 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:52.021016 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:52.021028 | orchestrator | 2025-08-29 17:38:52.021041 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-08-29 17:38:52.021053 | orchestrator | Friday 29 August 2025 17:37:14 +0000 (0:00:00.536) 0:02:04.059 ********* 2025-08-29 17:38:52.021065 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 17:38:52.021078 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:52.021090 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 17:38:52.021100 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:52.021111 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 17:38:52.021122 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:52.021133 | orchestrator | 2025-08-29 17:38:52.021144 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-08-29 17:38:52.021155 | orchestrator | Friday 29 August 2025 17:37:20 +0000 (0:00:05.239) 0:02:09.299 ********* 2025-08-29 17:38:52.021167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 17:38:52.021200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 17:38:52.021214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 17:38:52.021233 | orchestrator | 2025-08-29 17:38:52.021244 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 17:38:52.021255 | orchestrator | Friday 29 August 2025 17:37:24 +0000 (0:00:04.643) 0:02:13.942 ********* 2025-08-29 17:38:52.021265 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:38:52.021276 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:38:52.021287 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:38:52.021297 | orchestrator | 2025-08-29 17:38:52.021309 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-08-29 17:38:52.021320 | orchestrator | Friday 29 August 2025 17:37:24 +0000 (0:00:00.281) 0:02:14.224 ********* 2025-08-29 17:38:52.021330 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:38:52.021341 | orchestrator | 2025-08-29 17:38:52.021352 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-08-29 17:38:52.021363 | orchestrator | Friday 29 August 2025 17:37:26 +0000 (0:00:01.925) 0:02:16.149 ********* 2025-08-29 17:38:52.021374 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:38:52.021440 | orchestrator | 2025-08-29 17:38:52.021453 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 
2025-08-29 17:38:52.021464 | orchestrator | Friday 29 August 2025 17:37:28 +0000 (0:00:02.040) 0:02:18.190 ********* 2025-08-29 17:38:52.021475 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:38:52.021485 | orchestrator | 2025-08-29 17:38:52.021497 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-08-29 17:38:52.021513 | orchestrator | Friday 29 August 2025 17:37:30 +0000 (0:00:02.043) 0:02:20.233 ********* 2025-08-29 17:38:52.021524 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:38:52.021535 | orchestrator | 2025-08-29 17:38:52.021546 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-08-29 17:38:52.021564 | orchestrator | Friday 29 August 2025 17:37:57 +0000 (0:00:26.156) 0:02:46.390 ********* 2025-08-29 17:38:52.021575 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:38:52.021586 | orchestrator | 2025-08-29 17:38:52.021597 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-08-29 17:38:52.021607 | orchestrator | Friday 29 August 2025 17:37:59 +0000 (0:00:02.063) 0:02:48.453 ********* 2025-08-29 17:38:52.021618 | orchestrator | 2025-08-29 17:38:52.021629 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-08-29 17:38:52.021639 | orchestrator | Friday 29 August 2025 17:37:59 +0000 (0:00:00.141) 0:02:48.595 ********* 2025-08-29 17:38:52.021648 | orchestrator | 2025-08-29 17:38:52.021658 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-08-29 17:38:52.021667 | orchestrator | Friday 29 August 2025 17:37:59 +0000 (0:00:00.131) 0:02:48.726 ********* 2025-08-29 17:38:52.021677 | orchestrator | 2025-08-29 17:38:52.021687 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-08-29 17:38:52.021696 | orchestrator | Friday 29 August 2025 
17:37:59 +0000 (0:00:00.106) 0:02:48.833 ********* 2025-08-29 17:38:52.021706 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:38:52.021715 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:38:52.021725 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:38:52.021734 | orchestrator | 2025-08-29 17:38:52.021744 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:38:52.021755 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-08-29 17:38:52.021766 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 17:38:52.021783 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 17:38:52.021792 | orchestrator | 2025-08-29 17:38:52.021802 | orchestrator | 2025-08-29 17:38:52.021812 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:38:52.021821 | orchestrator | Friday 29 August 2025 17:38:50 +0000 (0:00:51.183) 0:03:40.016 ********* 2025-08-29 17:38:52.021831 | orchestrator | =============================================================================== 2025-08-29 17:38:52.021840 | orchestrator | glance : Restart glance-api container ---------------------------------- 51.18s 2025-08-29 17:38:52.021850 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.16s 2025-08-29 17:38:52.021859 | orchestrator | glance : Copying over glance-swift.conf for glance_api ----------------- 11.71s 2025-08-29 17:38:52.021869 | orchestrator | service-ks-register : glance | Creating services ------------------------ 9.01s 2025-08-29 17:38:52.021878 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.76s 2025-08-29 17:38:52.021888 | orchestrator | glance : Copying over glance-image-import.conf 
-------------------------- 7.78s 2025-08-29 17:38:52.021897 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 7.49s 2025-08-29 17:38:52.021907 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.97s 2025-08-29 17:38:52.021916 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 6.92s 2025-08-29 17:38:52.021925 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 6.75s 2025-08-29 17:38:52.021935 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.44s 2025-08-29 17:38:52.021944 | orchestrator | glance : Copying over config.json files for services -------------------- 5.71s 2025-08-29 17:38:52.021954 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.50s 2025-08-29 17:38:52.021963 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.42s 2025-08-29 17:38:52.021973 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.24s 2025-08-29 17:38:52.021982 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.01s 2025-08-29 17:38:52.021992 | orchestrator | glance : Check glance containers ---------------------------------------- 4.64s 2025-08-29 17:38:52.022001 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.54s 2025-08-29 17:38:52.022011 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.98s 2025-08-29 17:38:52.022069 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.61s 2025-08-29 17:38:52.022080 | orchestrator | 2025-08-29 17:38:52 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:38:52.022090 | orchestrator | 2025-08-29 17:38:52 | INFO  | Task 
33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:38:52.022100 | orchestrator | 2025-08-29 17:38:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:40:02.071086 | orchestrator | 2025-08-29 17:40:02 | INFO  | Task ee51af5b-ea40-46f3-b64b-2d36bed9ddbe is in state STARTED 2025-08-29 17:40:02.073111 | orchestrator | 2025-08-29 17:40:02 | INFO  | Task d5a4705b-f97e-4175-8a33-ff9ebf945f82 is in state SUCCESS 2025-08-29 17:40:02.075539 | orchestrator | 2025-08-29 17:40:02.075582 | orchestrator | 2025-08-29 17:40:02.075594 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:40:02.075606 | orchestrator | 2025-08-29 17:40:02.075617 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 17:40:02.075629 | orchestrator | Friday 29 August 2025 17:35:43 +0000 (0:00:00.292) 0:00:00.292 ********* 2025-08-29 17:40:02.075640 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:40:02.075651 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:40:02.075662 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:40:02.075673 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:40:02.075683 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:40:02.075694 | orchestrator | ok: [testbed-node-5] 2025-08-29
17:40:02.075704 | orchestrator | 2025-08-29 17:40:02.075715 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:40:02.075726 | orchestrator | Friday 29 August 2025 17:35:43 +0000 (0:00:00.861) 0:00:01.153 ********* 2025-08-29 17:40:02.075759 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-08-29 17:40:02.075771 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-08-29 17:40:02.075782 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-08-29 17:40:02.075793 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-08-29 17:40:02.075803 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-08-29 17:40:02.075815 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-08-29 17:40:02.075826 | orchestrator | 2025-08-29 17:40:02.075837 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-08-29 17:40:02.075847 | orchestrator | 2025-08-29 17:40:02.075858 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 17:40:02.075869 | orchestrator | Friday 29 August 2025 17:35:45 +0000 (0:00:01.559) 0:00:02.713 ********* 2025-08-29 17:40:02.075881 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:40:02.075892 | orchestrator | 2025-08-29 17:40:02.075903 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-08-29 17:40:02.075914 | orchestrator | Friday 29 August 2025 17:35:47 +0000 (0:00:02.147) 0:00:04.860 ********* 2025-08-29 17:40:02.075925 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-08-29 17:40:02.075936 | orchestrator | 2025-08-29 17:40:02.075946 | orchestrator | TASK [service-ks-register : cinder | 
Creating endpoints] *********************** 2025-08-29 17:40:02.075957 | orchestrator | Friday 29 August 2025 17:35:51 +0000 (0:00:03.678) 0:00:08.538 ********* 2025-08-29 17:40:02.075968 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-08-29 17:40:02.075979 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-08-29 17:40:02.075990 | orchestrator | 2025-08-29 17:40:02.076001 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-08-29 17:40:02.076012 | orchestrator | Friday 29 August 2025 17:35:57 +0000 (0:00:06.048) 0:00:14.587 ********* 2025-08-29 17:40:02.076023 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 17:40:02.076033 | orchestrator | 2025-08-29 17:40:02.076044 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-08-29 17:40:02.076055 | orchestrator | Friday 29 August 2025 17:36:00 +0000 (0:00:02.731) 0:00:17.319 ********* 2025-08-29 17:40:02.076065 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 17:40:02.076076 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-08-29 17:40:02.076087 | orchestrator | 2025-08-29 17:40:02.076098 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-08-29 17:40:02.076109 | orchestrator | Friday 29 August 2025 17:36:03 +0000 (0:00:03.586) 0:00:20.905 ********* 2025-08-29 17:40:02.076120 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 17:40:02.076130 | orchestrator | 2025-08-29 17:40:02.076141 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-08-29 17:40:02.076152 | orchestrator | Friday 29 August 2025 17:36:06 +0000 (0:00:03.049) 0:00:23.955 ********* 2025-08-29 
17:40:02.076165 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-08-29 17:40:02.076178 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-08-29 17:40:02.076190 | orchestrator | 2025-08-29 17:40:02.076203 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-08-29 17:40:02.076216 | orchestrator | Friday 29 August 2025 17:36:13 +0000 (0:00:06.785) 0:00:30.740 ********* 2025-08-29 17:40:02.076238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:40:02.076305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': 
'30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:40:02.076320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:40:02.076333 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.076344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.076356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.076431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 
2025-08-29 17:40:02.076578 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.076596 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.076608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.076619 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.076655 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.076667 | orchestrator |
2025-08-29 17:40:02.076710 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 17:40:02.076724 | orchestrator | Friday 29 August 2025 17:36:17 +0000 (0:00:04.257) 0:00:34.998 *********
2025-08-29 17:40:02.076735 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:40:02.076746 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:40:02.076757 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:40:02.076768 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:40:02.076779 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:40:02.076790 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:40:02.076801 | orchestrator |
2025-08-29 17:40:02.076812 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 17:40:02.076822 | orchestrator | Friday 29 August 2025 17:36:18 +0000 (0:00:01.085) 0:00:36.084 *********
2025-08-29 17:40:02.076833 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:40:02.076844 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:40:02.076855 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:40:02.076866 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:40:02.076877 | orchestrator |
2025-08-29 17:40:02.076887 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-08-29 17:40:02.076898 | orchestrator | Friday 29 August 2025 17:36:19 +0000 (0:00:01.126) 0:00:37.210 *********
2025-08-29 17:40:02.076909 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-08-29 17:40:02.076920 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-08-29 17:40:02.076930 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-08-29 17:40:02.076941 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-08-29 17:40:02.076952 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-08-29 17:40:02.076962 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-08-29 17:40:02.076973 | orchestrator |
2025-08-29 17:40:02.076984 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-08-29 17:40:02.076995 | orchestrator | Friday 29 August 2025 17:36:22 +0000 (0:00:02.513) 0:00:39.723 *********
2025-08-29 17:40:02.077007 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 17:40:02.077026 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 17:40:02.077049 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 17:40:02.077090 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 17:40:02.077104 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 17:40:02.077115 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 17:40:02.077127 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 17:40:02.077150 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 17:40:02.077190 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 17:40:02.077204 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 17:40:02.077216 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 17:40:02.077237 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 17:40:02.077251 | orchestrator |
2025-08-29 17:40:02.077263 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-08-29 17:40:02.077276 | orchestrator | Friday 29 August 2025 17:36:27 +0000 (0:00:05.033) 0:00:44.756 *********
2025-08-29 17:40:02.077289 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-08-29 17:40:02.077303 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-08-29 17:40:02.077316 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-08-29 17:40:02.077328 | orchestrator |
2025-08-29 17:40:02.077341 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-08-29 17:40:02.077354 | orchestrator | Friday 29 August 2025 17:36:30 +0000 (0:00:02.909) 0:00:47.666 *********
2025-08-29 17:40:02.077366 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-08-29 17:40:02.077379 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-08-29 17:40:02.077413 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-08-29 17:40:02.077427 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 17:40:02.077440 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 17:40:02.077484 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 17:40:02.077498 | orchestrator |
2025-08-29 17:40:02.077511 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-08-29 17:40:02.077524 | orchestrator | Friday 29 August 2025 17:36:33 +0000 (0:00:03.238) 0:00:50.905 *********
2025-08-29 17:40:02.077536 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-08-29 17:40:02.077549 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-08-29 17:40:02.077561 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-08-29 17:40:02.077574 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-08-29 17:40:02.077585 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-08-29 17:40:02.077596 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-08-29 17:40:02.077607 | orchestrator |
2025-08-29 17:40:02.077618 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-08-29 17:40:02.077629 | orchestrator | Friday 29 August 2025 17:36:34 +0000 (0:00:01.146) 0:00:52.051 *********
2025-08-29 17:40:02.077639 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:40:02.077650 | orchestrator |
2025-08-29 17:40:02.077661 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-08-29 17:40:02.077672 | orchestrator | Friday 29 August 2025 17:36:34 +0000 (0:00:00.173) 0:00:52.225 *********
2025-08-29 17:40:02.077683 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:40:02.077694 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:40:02.077704 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:40:02.077722 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:40:02.077733 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:40:02.077743 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:40:02.077754 | orchestrator |
2025-08-29 17:40:02.077765 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 17:40:02.077776 | orchestrator | Friday 29 August 2025 17:36:36 +0000 (0:00:01.455) 0:00:53.680 *********
2025-08-29 17:40:02.077788 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:40:02.077799 | orchestrator |
2025-08-29 17:40:02.077810 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-08-29 17:40:02.077821 | orchestrator | Friday 29 August 2025 17:36:37 +0000 (0:00:01.479) 0:00:55.160 *********
2025-08-29 17:40:02.077832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 17:40:02.077845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 17:40:02.077891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 17:40:02.077906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.077924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.077936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.077947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.077963 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.078006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.078066 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.078088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.078100 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.078111 | orchestrator |
2025-08-29 17:40:02.078123 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2025-08-29 17:40:02.078134 | orchestrator | Friday 29 August 2025 17:36:42 +0000 (0:00:04.298) 0:00:59.458 *********
2025-08-29 17:40:02.078145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 17:40:02.078192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.078206 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:40:02.078217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 17:40:02.078236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.078247 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:40:02.078259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 17:40:02.078270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.078285 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:40:02.078301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.078318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.078336 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:40:02.078348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.078359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.078371 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:40:02.078382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.078420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.078434 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:40:02.078445 | orchestrator |
2025-08-29 17:40:02.078461 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2025-08-29 17:40:02.078483 | orchestrator | Friday 29 August 2025 17:36:45 +0000 (0:00:02.998) 0:01:02.456 *********
2025-08-29 17:40:02.078503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 17:40:02.078516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:40:02.078528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 17:40:02.078539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:40:02.078550 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:40:02.078562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 17:40:02.078590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 
17:40:02.078602 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:40:02.078613 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:40:02.078625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 17:40:02.078637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 17:40:02.078648 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 17:40:02.078659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:40:02.078676 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:40:02.078688 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:40:02.078709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 17:40:02.078721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 17:40:02.078732 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:40:02.078743 | orchestrator | 2025-08-29 17:40:02.078754 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-08-29 17:40:02.078766 | orchestrator | Friday 29 August 2025 17:36:50 +0000 (0:00:05.146) 0:01:07.603 ********* 2025-08-29 17:40:02.078777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:40:02.078789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:40:02.078801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:40:02.078833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.078846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.078858 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.078869 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.078881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.078903 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.078921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.078933 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.078945 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.078956 | orchestrator | 2025-08-29 17:40:02.078967 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-08-29 17:40:02.078979 | orchestrator | Friday 29 August 2025 17:36:56 +0000 (0:00:06.355) 0:01:13.958 ********* 2025-08-29 17:40:02.078990 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 17:40:02.079001 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 17:40:02.079012 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:40:02.079023 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 17:40:02.079035 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:40:02.079046 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 17:40:02.079063 | orchestrator | skipping: [testbed-node-5] 2025-08-29 
17:40:02.079074 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 17:40:02.079084 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 17:40:02.079095 | orchestrator | 2025-08-29 17:40:02.079106 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-08-29 17:40:02.079118 | orchestrator | Friday 29 August 2025 17:37:00 +0000 (0:00:03.864) 0:01:17.822 ********* 2025-08-29 17:40:02.079133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:40:02.079152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:40:02.079164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.079176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:40:02.079193 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.079214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.079227 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.079238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.079250 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.079261 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.079279 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.079295 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.079306 | orchestrator | 2025-08-29 17:40:02.079318 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-08-29 17:40:02.079329 | orchestrator | Friday 29 August 2025 17:37:14 +0000 (0:00:13.612) 0:01:31.435 ********* 2025-08-29 17:40:02.079345 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:40:02.079356 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:40:02.079368 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:40:02.079378 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:40:02.079389 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:40:02.079417 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:40:02.079428 | orchestrator | 2025-08-29 17:40:02.079439 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-08-29 17:40:02.079450 | orchestrator | Friday 29 August 2025 17:37:16 +0000 (0:00:02.183) 0:01:33.618 ********* 2025-08-29 17:40:02.079462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 17:40:02.079473 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:40:02.079495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 17:40:02.079506 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:40:02.079518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:40:02.079529 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:40:02.079551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 17:40:02.079563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:40:02.079575 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:40:02.079586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 17:40:02.079604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 17:40:02.079616 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:40:02.079627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 17:40:02.079643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 17:40:02.079655 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:40:02.079673 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 17:40:02.079685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 17:40:02.079703 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:40:02.079715 | orchestrator | 2025-08-29 17:40:02.079726 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-08-29 17:40:02.079737 | orchestrator | Friday 29 August 2025 17:37:19 +0000 (0:00:03.099) 0:01:36.717 ********* 2025-08-29 17:40:02.079748 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:40:02.079759 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:40:02.079769 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:40:02.079780 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:40:02.079791 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:40:02.079801 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:40:02.079812 | orchestrator | 2025-08-29 17:40:02.079823 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-08-29 17:40:02.079835 | orchestrator | Friday 29 August 2025 17:37:20 +0000 (0:00:00.574) 0:01:37.292 ********* 2025-08-29 17:40:02.079846 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:40:02.079862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:40:02.079881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:40:02.079893 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.079911 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.079922 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.079934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 
2025-08-29 17:40:02.079955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.079967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.079985 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 
17:40:02.079997 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.080008 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 17:40:02.080020 | orchestrator | 2025-08-29 17:40:02.080031 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 17:40:02.080041 | orchestrator | Friday 29 August 2025 17:37:23 +0000 (0:00:03.068) 0:01:40.361 ********* 2025-08-29 17:40:02.080052 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:40:02.080064 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:40:02.080075 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:40:02.080085 | 
orchestrator | skipping: [testbed-node-3] 2025-08-29 17:40:02.080096 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:40:02.080107 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:40:02.080117 | orchestrator | 2025-08-29 17:40:02.080128 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-08-29 17:40:02.080139 | orchestrator | Friday 29 August 2025 17:37:23 +0000 (0:00:00.551) 0:01:40.912 ********* 2025-08-29 17:40:02.080150 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:40:02.080161 | orchestrator | 2025-08-29 17:40:02.080171 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-08-29 17:40:02.080182 | orchestrator | Friday 29 August 2025 17:37:25 +0000 (0:00:02.241) 0:01:43.154 ********* 2025-08-29 17:40:02.080193 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:40:02.080204 | orchestrator | 2025-08-29 17:40:02.080215 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-08-29 17:40:02.080230 | orchestrator | Friday 29 August 2025 17:37:27 +0000 (0:00:02.062) 0:01:45.217 ********* 2025-08-29 17:40:02.080241 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:40:02.080252 | orchestrator | 2025-08-29 17:40:02.080266 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 17:40:02.080278 | orchestrator | Friday 29 August 2025 17:37:45 +0000 (0:00:17.949) 0:02:03.166 ********* 2025-08-29 17:40:02.080288 | orchestrator | 2025-08-29 17:40:02.080304 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 17:40:02.080315 | orchestrator | Friday 29 August 2025 17:37:45 +0000 (0:00:00.070) 0:02:03.237 ********* 2025-08-29 17:40:02.080326 | orchestrator | 2025-08-29 17:40:02.080337 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 
2025-08-29 17:40:02.080347 | orchestrator | Friday 29 August 2025 17:37:46 +0000 (0:00:00.063) 0:02:03.300 ********* 2025-08-29 17:40:02.080358 | orchestrator | 2025-08-29 17:40:02.080369 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 17:40:02.080380 | orchestrator | Friday 29 August 2025 17:37:46 +0000 (0:00:00.069) 0:02:03.370 ********* 2025-08-29 17:40:02.080391 | orchestrator | 2025-08-29 17:40:02.080451 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 17:40:02.080462 | orchestrator | Friday 29 August 2025 17:37:46 +0000 (0:00:00.068) 0:02:03.439 ********* 2025-08-29 17:40:02.080473 | orchestrator | 2025-08-29 17:40:02.080484 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 17:40:02.080495 | orchestrator | Friday 29 August 2025 17:37:46 +0000 (0:00:00.075) 0:02:03.514 ********* 2025-08-29 17:40:02.080506 | orchestrator | 2025-08-29 17:40:02.080517 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-08-29 17:40:02.080528 | orchestrator | Friday 29 August 2025 17:37:46 +0000 (0:00:00.069) 0:02:03.584 ********* 2025-08-29 17:40:02.080539 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:40:02.080549 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:40:02.080559 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:40:02.080569 | orchestrator | 2025-08-29 17:40:02.080579 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-08-29 17:40:02.080589 | orchestrator | Friday 29 August 2025 17:38:10 +0000 (0:00:23.936) 0:02:27.521 ********* 2025-08-29 17:40:02.080598 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:40:02.080608 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:40:02.080618 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:40:02.080627 | 
orchestrator | 2025-08-29 17:40:02.080637 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-08-29 17:40:02.080647 | orchestrator | Friday 29 August 2025 17:38:21 +0000 (0:00:11.329) 0:02:38.850 ********* 2025-08-29 17:40:02.080657 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:40:02.080666 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:40:02.080676 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:40:02.080686 | orchestrator | 2025-08-29 17:40:02.080695 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-08-29 17:40:02.080705 | orchestrator | Friday 29 August 2025 17:39:45 +0000 (0:01:23.649) 0:04:02.499 ********* 2025-08-29 17:40:02.080715 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:40:02.080724 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:40:02.080734 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:40:02.080743 | orchestrator | 2025-08-29 17:40:02.080753 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-08-29 17:40:02.080763 | orchestrator | Friday 29 August 2025 17:39:59 +0000 (0:00:14.437) 0:04:16.936 ********* 2025-08-29 17:40:02.080773 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:40:02.080782 | orchestrator | 2025-08-29 17:40:02.080792 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:40:02.080802 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 17:40:02.080813 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-08-29 17:40:02.080829 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-08-29 17:40:02.080839 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 
skipped=8  rescued=0 ignored=0 2025-08-29 17:40:02.080849 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-08-29 17:40:02.080859 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-08-29 17:40:02.080869 | orchestrator | 2025-08-29 17:40:02.080878 | orchestrator | 2025-08-29 17:40:02.080888 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:40:02.080898 | orchestrator | Friday 29 August 2025 17:40:00 +0000 (0:00:00.806) 0:04:17.743 ********* 2025-08-29 17:40:02.080908 | orchestrator | =============================================================================== 2025-08-29 17:40:02.080918 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 83.65s 2025-08-29 17:40:02.080927 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 23.94s 2025-08-29 17:40:02.080937 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.95s 2025-08-29 17:40:02.080947 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 14.44s 2025-08-29 17:40:02.080957 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.61s 2025-08-29 17:40:02.080970 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 11.33s 2025-08-29 17:40:02.080980 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.79s 2025-08-29 17:40:02.080990 | orchestrator | cinder : Copying over config.json files for services -------------------- 6.36s 2025-08-29 17:40:02.081006 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.05s 2025-08-29 17:40:02.081016 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 5.15s 
2025-08-29 17:40:02.081026 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.03s 2025-08-29 17:40:02.081036 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.30s 2025-08-29 17:40:02.081045 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 4.26s 2025-08-29 17:40:02.081055 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.86s 2025-08-29 17:40:02.081065 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.68s 2025-08-29 17:40:02.081074 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.59s 2025-08-29 17:40:02.081096 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.24s 2025-08-29 17:40:02.081106 | orchestrator | cinder : Copying over existing policy file ------------------------------ 3.10s 2025-08-29 17:40:02.081116 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.07s 2025-08-29 17:40:02.081125 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.05s 2025-08-29 17:40:02.081135 | orchestrator | 2025-08-29 17:40:02 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:40:02.081145 | orchestrator | 2025-08-29 17:40:02 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:40:02.081155 | orchestrator | 2025-08-29 17:40:02 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:40:02.081164 | orchestrator | 2025-08-29 17:40:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:40:05.117220 | orchestrator | 2025-08-29 17:40:05 | INFO  | Task ee51af5b-ea40-46f3-b64b-2d36bed9ddbe is in state STARTED
cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:40:41.639599 | orchestrator | 2025-08-29 17:40:41 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:40:41.640385 | orchestrator | 2025-08-29 17:40:41 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:40:41.640449 | orchestrator | 2025-08-29 17:40:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:40:44.689010 | orchestrator | 2025-08-29 17:40:44 | INFO  | Task ee51af5b-ea40-46f3-b64b-2d36bed9ddbe is in state STARTED 2025-08-29 17:40:44.689466 | orchestrator | 2025-08-29 17:40:44 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:40:44.690431 | orchestrator | 2025-08-29 17:40:44 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:40:44.691356 | orchestrator | 2025-08-29 17:40:44 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:40:44.691384 | orchestrator | 2025-08-29 17:40:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:40:47.730316 | orchestrator | 2025-08-29 17:40:47 | INFO  | Task ee51af5b-ea40-46f3-b64b-2d36bed9ddbe is in state STARTED 2025-08-29 17:40:47.731370 | orchestrator | 2025-08-29 17:40:47 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:40:47.732942 | orchestrator | 2025-08-29 17:40:47 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:40:47.733818 | orchestrator | 2025-08-29 17:40:47 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:40:47.733911 | orchestrator | 2025-08-29 17:40:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:40:50.775786 | orchestrator | 2025-08-29 17:40:50 | INFO  | Task ee51af5b-ea40-46f3-b64b-2d36bed9ddbe is in state STARTED 2025-08-29 17:40:50.778982 | orchestrator | 2025-08-29 17:40:50 | INFO  | Task 
cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:40:50.782795 | orchestrator | 2025-08-29 17:40:50 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:40:50.784922 | orchestrator | 2025-08-29 17:40:50 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:40:50.786090 | orchestrator | 2025-08-29 17:40:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:40:53.811441 | orchestrator | 2025-08-29 17:40:53 | INFO  | Task ee51af5b-ea40-46f3-b64b-2d36bed9ddbe is in state STARTED 2025-08-29 17:40:53.812162 | orchestrator | 2025-08-29 17:40:53 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:40:53.813064 | orchestrator | 2025-08-29 17:40:53 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:40:53.814144 | orchestrator | 2025-08-29 17:40:53 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:40:53.814169 | orchestrator | 2025-08-29 17:40:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:40:56.855454 | orchestrator | 2025-08-29 17:40:56 | INFO  | Task ee51af5b-ea40-46f3-b64b-2d36bed9ddbe is in state STARTED 2025-08-29 17:40:56.856947 | orchestrator | 2025-08-29 17:40:56 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:40:56.858328 | orchestrator | 2025-08-29 17:40:56 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:40:56.859478 | orchestrator | 2025-08-29 17:40:56 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:40:56.859625 | orchestrator | 2025-08-29 17:40:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:40:59.890829 | orchestrator | 2025-08-29 17:40:59 | INFO  | Task ee51af5b-ea40-46f3-b64b-2d36bed9ddbe is in state STARTED 2025-08-29 17:40:59.891432 | orchestrator | 2025-08-29 17:40:59 | INFO  | Task 
cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:40:59.892330 | orchestrator | 2025-08-29 17:40:59 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:40:59.893549 | orchestrator | 2025-08-29 17:40:59 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:40:59.893572 | orchestrator | 2025-08-29 17:40:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:41:02.926382 | orchestrator | 2025-08-29 17:41:02 | INFO  | Task ee51af5b-ea40-46f3-b64b-2d36bed9ddbe is in state STARTED 2025-08-29 17:41:02.926848 | orchestrator | 2025-08-29 17:41:02 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:41:02.927850 | orchestrator | 2025-08-29 17:41:02 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:41:02.928834 | orchestrator | 2025-08-29 17:41:02 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:41:02.928856 | orchestrator | 2025-08-29 17:41:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:41:05.968466 | orchestrator | 2025-08-29 17:41:05 | INFO  | Task ee51af5b-ea40-46f3-b64b-2d36bed9ddbe is in state STARTED 2025-08-29 17:41:05.969729 | orchestrator | 2025-08-29 17:41:05 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:41:05.970939 | orchestrator | 2025-08-29 17:41:05 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:41:05.972965 | orchestrator | 2025-08-29 17:41:05 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:41:05.972992 | orchestrator | 2025-08-29 17:41:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:41:09.011731 | orchestrator | 2025-08-29 17:41:09 | INFO  | Task ee51af5b-ea40-46f3-b64b-2d36bed9ddbe is in state SUCCESS 2025-08-29 17:41:09.012612 | orchestrator | 2025-08-29 17:41:09.012716 | orchestrator | 2025-08-29 
17:41:09.013155 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 17:41:09.013171 | orchestrator |
2025-08-29 17:41:09.013183 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 17:41:09.013195 | orchestrator | Friday 29 August 2025 17:38:55 +0000 (0:00:00.297) 0:00:00.297 *********
2025-08-29 17:41:09.013206 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:41:09.013218 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:41:09.013229 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:41:09.013241 | orchestrator |
2025-08-29 17:41:09.013252 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 17:41:09.013264 | orchestrator | Friday 29 August 2025 17:38:56 +0000 (0:00:00.334) 0:00:00.631 *********
2025-08-29 17:41:09.013276 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-08-29 17:41:09.013287 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-08-29 17:41:09.013298 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-08-29 17:41:09.013309 | orchestrator |
2025-08-29 17:41:09.013320 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-08-29 17:41:09.013355 | orchestrator |
2025-08-29 17:41:09.013367 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-08-29 17:41:09.013378 | orchestrator | Friday 29 August 2025 17:38:56 +0000 (0:00:00.490) 0:00:01.122 *********
2025-08-29 17:41:09.013389 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:41:09.013400 | orchestrator |
2025-08-29 17:41:09.013433 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-08-29 17:41:09.013444 | orchestrator | Friday 29 August 2025 17:38:57 +0000 (0:00:00.679) 0:00:01.802 *********
2025-08-29 17:41:09.013455 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-08-29 17:41:09.013466 | orchestrator |
2025-08-29 17:41:09.013477 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-08-29 17:41:09.013488 | orchestrator | Friday 29 August 2025 17:39:00 +0000 (0:00:03.118) 0:00:04.920 *********
2025-08-29 17:41:09.013498 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-08-29 17:41:09.013509 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-08-29 17:41:09.013598 | orchestrator |
2025-08-29 17:41:09.013611 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-08-29 17:41:09.013622 | orchestrator | Friday 29 August 2025 17:39:06 +0000 (0:00:05.897) 0:00:10.818 *********
2025-08-29 17:41:09.013633 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-08-29 17:41:09.013644 | orchestrator |
2025-08-29 17:41:09.013655 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-08-29 17:41:09.013666 | orchestrator | Friday 29 August 2025 17:39:09 +0000 (0:00:03.048) 0:00:13.868 *********
2025-08-29 17:41:09.013677 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 17:41:09.013688 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-08-29 17:41:09.013699 | orchestrator |
2025-08-29 17:41:09.013710 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-08-29 17:41:09.013721 | orchestrator | Friday 29 August 2025 17:39:13 +0000 (0:00:03.893) 0:00:17.761 *********
2025-08-29 17:41:09.013732 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 17:41:09.013744 | orchestrator |
changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-08-29 17:41:09.013755 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-08-29 17:41:09.013778 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-08-29 17:41:09.013790 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-08-29 17:41:09.013801 | orchestrator | 2025-08-29 17:41:09.013812 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-08-29 17:41:09.013823 | orchestrator | Friday 29 August 2025 17:39:28 +0000 (0:00:14.900) 0:00:32.662 ********* 2025-08-29 17:41:09.013834 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-08-29 17:41:09.013845 | orchestrator | 2025-08-29 17:41:09.013856 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-08-29 17:41:09.013867 | orchestrator | Friday 29 August 2025 17:39:32 +0000 (0:00:04.212) 0:00:36.874 ********* 2025-08-29 17:41:09.013905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 17:41:09.013944 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 17:41:09.013957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 17:41:09.013970 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 17:41:09.013988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 17:41:09.013999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 17:41:09.014103 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:41:09.014132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:41:09.014152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:41:09.014171 | orchestrator | 2025-08-29 17:41:09.014183 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-08-29 
17:41:09.014194 | orchestrator | Friday 29 August 2025 17:39:35 +0000 (0:00:03.402) 0:00:40.277 *********
2025-08-29 17:41:09.014205 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-08-29 17:41:09.014217 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-08-29 17:41:09.014230 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-08-29 17:41:09.014243 | orchestrator |
2025-08-29 17:41:09.014256 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-08-29 17:41:09.014268 | orchestrator | Friday 29 August 2025 17:39:37 +0000 (0:00:01.390) 0:00:41.667 *********
2025-08-29 17:41:09.014281 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:41:09.014293 | orchestrator |
2025-08-29 17:41:09.014306 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-08-29 17:41:09.014319 | orchestrator | Friday 29 August 2025 17:39:37 +0000 (0:00:00.145) 0:00:41.813 *********
2025-08-29 17:41:09.014331 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:41:09.014343 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:41:09.014354 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:41:09.014365 | orchestrator |
2025-08-29 17:41:09.014375 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-08-29 17:41:09.014386 | orchestrator | Friday 29 August 2025 17:39:37 +0000 (0:00:00.621) 0:00:42.435 *********
2025-08-29 17:41:09.014397 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:41:09.014465 | orchestrator |
2025-08-29 17:41:09.014485 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-08-29 17:41:09.014496 | orchestrator | Friday 29 August 2025 17:39:38 +0000 (0:00:00.993) 0:00:43.428 ********* 2025-08-29
17:41:09.014508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 17:41:09.014538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 17:41:09.014550 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 17:41:09.014562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 17:41:09.014578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 17:41:09.014590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 17:41:09.014625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:41:09.014645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.014657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.014668 | orchestrator |
2025-08-29 17:41:09.014679 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2025-08-29 17:41:09.014691 | orchestrator | Friday 29 August 2025 17:39:43 +0000 (0:00:04.600)       0:00:48.029 *********
2025-08-29 17:41:09.014703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 17:41:09.014720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.014738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.014750 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:41:09.014768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 17:41:09.014780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.014907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.014928 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:41:09.014945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 17:41:09.014983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015011 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:41:09.015020 | orchestrator |
2025-08-29 17:41:09.015030 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2025-08-29 17:41:09.015040 | orchestrator | Friday 29 August 2025 17:39:45 +0000 (0:00:02.334)       0:00:50.363 *********
2025-08-29 17:41:09.015058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 17:41:09.015069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015089 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:41:09.015109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 17:41:09.015125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015146 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:41:09.015162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 17:41:09.015173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015217 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:41:09.015228 | orchestrator |
2025-08-29 17:41:09.015237 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2025-08-29 17:41:09.015247 | orchestrator | Friday 29 August 2025 17:39:47 +0000 (0:00:01.620)       0:00:51.983 *********
2025-08-29 17:41:09.015261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 17:41:09.015277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 17:41:09.015288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 17:41:09.015298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015373 | orchestrator |
2025-08-29 17:41:09.015383 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2025-08-29 17:41:09.015393 | orchestrator | Friday 29 August 2025 17:39:52 +0000 (0:00:05.591)       0:00:57.575 *********
2025-08-29 17:41:09.015403 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:41:09.015431 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:41:09.015441 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:41:09.015451 | orchestrator |
2025-08-29 17:41:09.015460 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-08-29 17:41:09.015470 | orchestrator | Friday 29 August 2025 17:39:56 +0000 (0:00:03.841)       0:01:01.417 *********
2025-08-29 17:41:09.015480 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 17:41:09.015496 | orchestrator |
2025-08-29 17:41:09.015506 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-08-29 17:41:09.015515 | orchestrator | Friday 29 August 2025 17:39:58 +0000 (0:00:01.554)       0:01:02.971 *********
2025-08-29 17:41:09.015526 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:41:09.015538 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:41:09.015549 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:41:09.015560 | orchestrator |
2025-08-29 17:41:09.015571 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2025-08-29 17:41:09.015582 | orchestrator | Friday 29 August 2025 17:39:59 +0000 (0:00:01.519)       0:01:04.491 *********
2025-08-29 17:41:09.015594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 17:41:09.015610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 17:41:09.015628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 17:41:09.015641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015807 | orchestrator |
2025-08-29 17:41:09.015817 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2025-08-29 17:41:09.015827 | orchestrator | Friday 29 August 2025 17:40:10 +0000 (0:00:10.981)       0:01:15.472 *********
2025-08-29 17:41:09.015845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 17:41:09.015862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015883 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:41:09.015897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 17:41:09.015908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015939 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:41:09.015949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 17:41:09.015959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.015980 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:41:09.015990 | orchestrator |
2025-08-29 17:41:09.016004 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2025-08-29 17:41:09.016014 | orchestrator | Friday 29 August 2025 17:40:12 +0000 (0:00:01.271)       0:01:16.743 *********
2025-08-29 17:41:09.016024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 17:41:09.016040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 17:41:09.016056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 17:41:09.016066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout':
'30'}}}) 2025-08-29 17:41:09.016076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:41:09.016093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:41:09.016104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 17:41:09.016120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 17:41:09.016136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:41:09.016146 | orchestrator | 2025-08-29 17:41:09.016156 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-08-29 17:41:09.016165 | orchestrator | Friday 29 August 2025 17:40:17 +0000 (0:00:05.294) 0:01:22.038 ********* 2025-08-29 17:41:09.016175 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:41:09.016185 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:41:09.016194 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 17:41:09.016204 | orchestrator | 2025-08-29 17:41:09.016213 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-08-29 17:41:09.016223 | orchestrator | Friday 29 August 2025 17:40:18 +0000 (0:00:00.760) 0:01:22.799 ********* 2025-08-29 17:41:09.016233 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:41:09.016242 | orchestrator | 2025-08-29 17:41:09.016252 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-08-29 17:41:09.016261 | orchestrator | Friday 29 August 2025 17:40:20 +0000 (0:00:02.107) 0:01:24.906 ********* 2025-08-29 17:41:09.016271 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:41:09.016280 | orchestrator | 2025-08-29 17:41:09.016290 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-08-29 17:41:09.016300 | orchestrator | Friday 29 August 2025 17:40:22 +0000 (0:00:02.334) 0:01:27.241 ********* 2025-08-29 17:41:09.016309 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:41:09.016319 | orchestrator | 2025-08-29 17:41:09.016328 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 17:41:09.016338 | orchestrator | Friday 29 August 2025 17:40:34 +0000 (0:00:11.792) 0:01:39.033 ********* 2025-08-29 17:41:09.016348 | orchestrator | 2025-08-29 17:41:09.016357 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 17:41:09.016367 | orchestrator | Friday 29 August 2025 17:40:34 +0000 (0:00:00.074) 0:01:39.107 ********* 2025-08-29 17:41:09.016376 | orchestrator | 2025-08-29 17:41:09.016386 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 17:41:09.016396 | orchestrator | Friday 29 August 2025 17:40:34 +0000 (0:00:00.104) 0:01:39.212 ********* 2025-08-29 17:41:09.016420 | orchestrator | 2025-08-29 
17:41:09.016430 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-08-29 17:41:09.016440 | orchestrator | Friday 29 August 2025 17:40:34 +0000 (0:00:00.090) 0:01:39.303 ********* 2025-08-29 17:41:09.016454 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:41:09.016464 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:41:09.016473 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:41:09.016483 | orchestrator | 2025-08-29 17:41:09.016493 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-08-29 17:41:09.016502 | orchestrator | Friday 29 August 2025 17:40:43 +0000 (0:00:08.640) 0:01:47.944 ********* 2025-08-29 17:41:09.016512 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:41:09.016528 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:41:09.016537 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:41:09.016547 | orchestrator | 2025-08-29 17:41:09.016556 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-08-29 17:41:09.016566 | orchestrator | Friday 29 August 2025 17:40:53 +0000 (0:00:10.074) 0:01:58.018 ********* 2025-08-29 17:41:09.016575 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:41:09.016585 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:41:09.016594 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:41:09.016604 | orchestrator | 2025-08-29 17:41:09.016614 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:41:09.016624 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 17:41:09.016634 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:41:09.016644 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 
ignored=0 2025-08-29 17:41:09.016654 | orchestrator | 2025-08-29 17:41:09.016663 | orchestrator | 2025-08-29 17:41:09.016673 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:41:09.016683 | orchestrator | Friday 29 August 2025 17:41:07 +0000 (0:00:13.597) 0:02:11.615 ********* 2025-08-29 17:41:09.016692 | orchestrator | =============================================================================== 2025-08-29 17:41:09.016702 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.90s 2025-08-29 17:41:09.016717 | orchestrator | barbican : Restart barbican-worker container --------------------------- 13.60s 2025-08-29 17:41:09.016727 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.79s 2025-08-29 17:41:09.016736 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.98s 2025-08-29 17:41:09.016746 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.07s 2025-08-29 17:41:09.016756 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.64s 2025-08-29 17:41:09.016765 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 5.90s 2025-08-29 17:41:09.016775 | orchestrator | barbican : Copying over config.json files for services ------------------ 5.59s 2025-08-29 17:41:09.016784 | orchestrator | barbican : Check barbican containers ------------------------------------ 5.29s 2025-08-29 17:41:09.016794 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.60s 2025-08-29 17:41:09.016803 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.21s 2025-08-29 17:41:09.016813 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.89s 2025-08-29 17:41:09.016822 | orchestrator | 
barbican : Copying over barbican-api.ini -------------------------------- 3.84s 2025-08-29 17:41:09.016832 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.40s 2025-08-29 17:41:09.016841 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.12s 2025-08-29 17:41:09.016851 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.05s 2025-08-29 17:41:09.016860 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.33s 2025-08-29 17:41:09.016870 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.33s 2025-08-29 17:41:09.016880 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.11s 2025-08-29 17:41:09.016889 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.62s 2025-08-29 17:41:09.016899 | orchestrator | 2025-08-29 17:41:09 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:41:09.016909 | orchestrator | 2025-08-29 17:41:09 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:41:09.016924 | orchestrator | 2025-08-29 17:41:09 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:41:09.017290 | orchestrator | 2025-08-29 17:41:09 | INFO  | Task 2f9bfd05-f9a6-4392-a744-7f1313225c7a is in state STARTED 2025-08-29 17:41:09.018563 | orchestrator | 2025-08-29 17:41:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:41:12.050985 | orchestrator | 2025-08-29 17:41:12 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:41:12.051712 | orchestrator | 2025-08-29 17:41:12 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:41:12.052852 | orchestrator | 2025-08-29 17:41:12 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb 
is in state STARTED 2025-08-29 17:41:12.056114 | orchestrator | 2025-08-29 17:41:12 | INFO  | Task 2f9bfd05-f9a6-4392-a744-7f1313225c7a is in state STARTED 2025-08-29 17:41:12.056192 | orchestrator | 2025-08-29 17:41:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:42:03.819018 | orchestrator | 2025-08-29 17:42:03 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:42:03.819129 | orchestrator | 2025-08-29 17:42:03 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:42:03.819615 | orchestrator | 2025-08-29 17:42:03 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:42:03.820281 | orchestrator | 2025-08-29 17:42:03 | INFO  | Task 2f9bfd05-f9a6-4392-a744-7f1313225c7a is in state SUCCESS 2025-08-29 17:42:03.820308 | orchestrator | 2025-08-29 17:42:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:42:06.853172 | orchestrator | 2025-08-29 17:42:06 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:42:06.853791 | orchestrator | 2025-08-29 17:42:06 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:42:06.854863 | orchestrator | 2025-08-29 17:42:06 | INFO  | Task 90f0dcaf-7cea-4402-a274-a8d3a5098c52 is in state STARTED 2025-08-29 17:42:06.857114 | orchestrator | 2025-08-29 17:42:06 | INFO  | Task 
33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:42:06.857174 | orchestrator | 2025-08-29 17:42:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:42:09.891291 | orchestrator | 2025-08-29 17:42:09 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:42:09.893250 | orchestrator | 2025-08-29 17:42:09 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:42:09.895534 | orchestrator | 2025-08-29 17:42:09 | INFO  | Task 90f0dcaf-7cea-4402-a274-a8d3a5098c52 is in state STARTED 2025-08-29 17:42:09.898469 | orchestrator | 2025-08-29 17:42:09 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:42:09.898541 | orchestrator | 2025-08-29 17:42:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:42:12.940480 | orchestrator | 2025-08-29 17:42:12 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:42:12.940710 | orchestrator | 2025-08-29 17:42:12 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:42:12.941871 | orchestrator | 2025-08-29 17:42:12 | INFO  | Task 90f0dcaf-7cea-4402-a274-a8d3a5098c52 is in state STARTED 2025-08-29 17:42:12.946009 | orchestrator | 2025-08-29 17:42:12 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:42:12.946113 | orchestrator | 2025-08-29 17:42:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:42:15.985937 | orchestrator | 2025-08-29 17:42:15 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:42:15.988128 | orchestrator | 2025-08-29 17:42:15 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:42:15.990218 | orchestrator | 2025-08-29 17:42:15 | INFO  | Task 90f0dcaf-7cea-4402-a274-a8d3a5098c52 is in state STARTED 2025-08-29 17:42:15.992061 | orchestrator | 2025-08-29 17:42:15 | INFO  | Task 
33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:42:15.992574 | orchestrator | 2025-08-29 17:42:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:42:19.036303 | orchestrator | 2025-08-29 17:42:19 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:42:19.036494 | orchestrator | 2025-08-29 17:42:19 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:42:19.036513 | orchestrator | 2025-08-29 17:42:19 | INFO  | Task 90f0dcaf-7cea-4402-a274-a8d3a5098c52 is in state STARTED 2025-08-29 17:42:19.036524 | orchestrator | 2025-08-29 17:42:19 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:42:19.036535 | orchestrator | 2025-08-29 17:42:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:42:22.079732 | orchestrator | 2025-08-29 17:42:22 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:42:22.082813 | orchestrator | 2025-08-29 17:42:22 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:42:22.086216 | orchestrator | 2025-08-29 17:42:22 | INFO  | Task 90f0dcaf-7cea-4402-a274-a8d3a5098c52 is in state STARTED 2025-08-29 17:42:22.088226 | orchestrator | 2025-08-29 17:42:22 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:42:22.088278 | orchestrator | 2025-08-29 17:42:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:42:25.121375 | orchestrator | 2025-08-29 17:42:25 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:42:25.123169 | orchestrator | 2025-08-29 17:42:25 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:42:25.126462 | orchestrator | 2025-08-29 17:42:25 | INFO  | Task 90f0dcaf-7cea-4402-a274-a8d3a5098c52 is in state STARTED 2025-08-29 17:42:25.129308 | orchestrator | 2025-08-29 17:42:25 | INFO  | Task 
33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:42:25.130113 | orchestrator | 2025-08-29 17:42:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:43:29.129073 | orchestrator | 2025-08-29 17:43:29 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:43:29.130737 | orchestrator | 2025-08-29 17:43:29 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:43:29.132342 | orchestrator | 2025-08-29 17:43:29 | INFO  | Task 90f0dcaf-7cea-4402-a274-a8d3a5098c52 is in state STARTED 2025-08-29 17:43:29.133687 | orchestrator | 2025-08-29 17:43:29 | INFO  | Task 
33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:43:29.133713 | orchestrator | 2025-08-29 17:43:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:43:32.177848 | orchestrator | 2025-08-29 17:43:32 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:43:32.178979 | orchestrator | 2025-08-29 17:43:32 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:43:32.180271 | orchestrator | 2025-08-29 17:43:32 | INFO  | Task 90f0dcaf-7cea-4402-a274-a8d3a5098c52 is in state STARTED 2025-08-29 17:43:32.181387 | orchestrator | 2025-08-29 17:43:32 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:43:32.181664 | orchestrator | 2025-08-29 17:43:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:43:35.270259 | orchestrator | 2025-08-29 17:43:35 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:43:35.270950 | orchestrator | 2025-08-29 17:43:35 | INFO  | Task a4dc02b4-0100-448d-9a4d-01868e3be8a2 is in state STARTED 2025-08-29 17:43:35.272374 | orchestrator | 2025-08-29 17:43:35 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:43:35.274362 | orchestrator | 2025-08-29 17:43:35 | INFO  | Task 90f0dcaf-7cea-4402-a274-a8d3a5098c52 is in state SUCCESS 2025-08-29 17:43:35.274518 | orchestrator | 2025-08-29 17:43:35.274626 | orchestrator | 2025-08-29 17:43:35.274641 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-08-29 17:43:35.274652 | orchestrator | 2025-08-29 17:43:35.274663 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-08-29 17:43:35.274674 | orchestrator | Friday 29 August 2025 17:41:19 +0000 (0:00:00.382) 0:00:00.382 ********* 2025-08-29 17:43:35.274697 | orchestrator | changed: [localhost] 2025-08-29 17:43:35.274708 | orchestrator | 2025-08-29 
17:43:35.274718 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-08-29 17:43:35.274728 | orchestrator | Friday 29 August 2025 17:41:22 +0000 (0:00:02.764) 0:00:03.146 ********* 2025-08-29 17:43:35.274738 | orchestrator | changed: [localhost] 2025-08-29 17:43:35.274748 | orchestrator | 2025-08-29 17:43:35.274757 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-08-29 17:43:35.274767 | orchestrator | Friday 29 August 2025 17:41:56 +0000 (0:00:34.184) 0:00:37.330 ********* 2025-08-29 17:43:35.274777 | orchestrator | changed: [localhost] 2025-08-29 17:43:35.274787 | orchestrator | 2025-08-29 17:43:35.274797 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:43:35.274806 | orchestrator | 2025-08-29 17:43:35.274816 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 17:43:35.274826 | orchestrator | Friday 29 August 2025 17:42:02 +0000 (0:00:06.203) 0:00:43.534 ********* 2025-08-29 17:43:35.274836 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:43:35.274845 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:43:35.274855 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:43:35.274865 | orchestrator | 2025-08-29 17:43:35.274875 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:43:35.274885 | orchestrator | Friday 29 August 2025 17:42:02 +0000 (0:00:00.325) 0:00:43.859 ********* 2025-08-29 17:43:35.274926 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-08-29 17:43:35.274983 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-08-29 17:43:35.274995 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-08-29 17:43:35.275006 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 
2025-08-29 17:43:35.275050 | orchestrator | 2025-08-29 17:43:35.275061 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-08-29 17:43:35.275072 | orchestrator | skipping: no hosts matched 2025-08-29 17:43:35.275096 | orchestrator | 2025-08-29 17:43:35.275107 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:43:35.275118 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:43:35.275132 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:43:35.275147 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:43:35.275160 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:43:35.275172 | orchestrator | 2025-08-29 17:43:35.275184 | orchestrator | 2025-08-29 17:43:35.275199 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:43:35.275218 | orchestrator | Friday 29 August 2025 17:42:03 +0000 (0:00:00.490) 0:00:44.349 ********* 2025-08-29 17:43:35.275237 | orchestrator | =============================================================================== 2025-08-29 17:43:35.275259 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 34.19s 2025-08-29 17:43:35.275361 | orchestrator | Download ironic-agent kernel -------------------------------------------- 6.20s 2025-08-29 17:43:35.275374 | orchestrator | Ensure the destination directory exists --------------------------------- 2.76s 2025-08-29 17:43:35.275402 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s 2025-08-29 17:43:35.275428 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 
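The repeated "Task <uuid> is in state STARTED … Wait 1 second(s) until the next check" records above follow a simple poll-until-done pattern: query each outstanding task's state, report it, sleep a fixed interval, and repeat until every task reaches a terminal state. A minimal sketch of that loop follows; `get_task_state` is a hypothetical callback standing in for however the orchestrator queries its task backend (the real OSISM/Celery API is not shown in this log), and the state names mirror those printed above.

```python
import time


def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600.0):
    """Poll task states until every task reaches a terminal state.

    get_task_state(task_id) -> str is a hypothetical lookup; states are
    assumed to follow the Celery-style names seen in the log
    ("STARTED", "SUCCESS", "FAILURE").
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    results = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                results[task_id] = state
        # Drop finished tasks, then wait before the next polling round.
        pending -= set(results)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

With a one-second interval this reproduces the cadence seen in the log: each round prints one line per still-running task, then a wait notice, until tasks start reporting SUCCESS.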
2025-08-29 17:43:35.275475 | orchestrator | 2025-08-29 17:43:35.275848 | orchestrator | 2025-08-29 17:43:35.275872 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:43:35.275884 | orchestrator | 2025-08-29 17:43:35.275895 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 17:43:35.275906 | orchestrator | Friday 29 August 2025 17:42:10 +0000 (0:00:00.294) 0:00:00.294 ********* 2025-08-29 17:43:35.275916 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:43:35.275961 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:43:35.275975 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:43:35.276001 | orchestrator | 2025-08-29 17:43:35.276013 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:43:35.276024 | orchestrator | Friday 29 August 2025 17:42:10 +0000 (0:00:00.329) 0:00:00.624 ********* 2025-08-29 17:43:35.276035 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-08-29 17:43:35.276046 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-08-29 17:43:35.276058 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-08-29 17:43:35.276069 | orchestrator | 2025-08-29 17:43:35.276080 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-08-29 17:43:35.276090 | orchestrator | 2025-08-29 17:43:35.276101 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 17:43:35.276112 | orchestrator | Friday 29 August 2025 17:42:11 +0000 (0:00:00.790) 0:00:01.415 ********* 2025-08-29 17:43:35.276123 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:43:35.276133 | orchestrator | 2025-08-29 17:43:35.276160 | orchestrator | TASK [service-ks-register : placement 
| Creating services] ********************* 2025-08-29 17:43:35.276171 | orchestrator | Friday 29 August 2025 17:42:12 +0000 (0:00:01.435) 0:00:02.850 ********* 2025-08-29 17:43:35.276182 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-08-29 17:43:35.276192 | orchestrator | 2025-08-29 17:43:35.276203 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-08-29 17:43:35.276214 | orchestrator | Friday 29 August 2025 17:42:16 +0000 (0:00:03.771) 0:00:06.622 ********* 2025-08-29 17:43:35.276225 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-08-29 17:43:35.276236 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-08-29 17:43:35.276247 | orchestrator | 2025-08-29 17:43:35.276258 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-08-29 17:43:35.276269 | orchestrator | Friday 29 August 2025 17:42:23 +0000 (0:00:06.322) 0:00:12.944 ********* 2025-08-29 17:43:35.276280 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 17:43:35.276290 | orchestrator | 2025-08-29 17:43:35.276301 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-08-29 17:43:35.276312 | orchestrator | Friday 29 August 2025 17:42:26 +0000 (0:00:03.109) 0:00:16.053 ********* 2025-08-29 17:43:35.276323 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 17:43:35.276333 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-08-29 17:43:35.276344 | orchestrator | 2025-08-29 17:43:35.276355 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-08-29 17:43:35.276366 | orchestrator | Friday 29 August 2025 17:42:29 +0000 (0:00:03.706) 0:00:19.759 ********* 2025-08-29 17:43:35.276377 
| orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 17:43:35.276388 | orchestrator | 2025-08-29 17:43:35.276399 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-08-29 17:43:35.276553 | orchestrator | Friday 29 August 2025 17:42:33 +0000 (0:00:03.282) 0:00:23.042 ********* 2025-08-29 17:43:35.276585 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-08-29 17:43:35.276598 | orchestrator | 2025-08-29 17:43:35.276610 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 17:43:35.276622 | orchestrator | Friday 29 August 2025 17:42:37 +0000 (0:00:04.498) 0:00:27.541 ********* 2025-08-29 17:43:35.276638 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:35.276652 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:35.276664 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:43:35.276676 | orchestrator | 2025-08-29 17:43:35.276688 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-08-29 17:43:35.276700 | orchestrator | Friday 29 August 2025 17:42:38 +0000 (0:00:00.573) 0:00:28.114 ********* 2025-08-29 17:43:35.276716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 17:43:35.276750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 17:43:35.276774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 17:43:35.276787 | orchestrator | 2025-08-29 17:43:35.276799 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-08-29 17:43:35.276811 | orchestrator | Friday 29 August 2025 17:42:39 +0000 (0:00:01.534) 0:00:29.649 ********* 2025-08-29 17:43:35.276823 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:35.276834 | orchestrator | 2025-08-29 17:43:35.276846 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-08-29 17:43:35.276856 | orchestrator | Friday 29 August 2025 17:42:40 +0000 (0:00:00.277) 0:00:29.927 ********* 2025-08-29 17:43:35.276867 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:35.276878 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:35.276889 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:43:35.276899 | orchestrator | 2025-08-29 17:43:35.276910 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 17:43:35.276921 | orchestrator | Friday 29 August 2025 17:42:41 +0000 (0:00:01.384) 0:00:31.311 ********* 2025-08-29 17:43:35.276932 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:43:35.276943 | orchestrator | 2025-08-29 17:43:35.276954 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-08-29 17:43:35.276965 | orchestrator | Friday 29 August 2025 17:42:42 +0000 (0:00:01.305) 0:00:32.617 ********* 2025-08-29 17:43:35.276982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 17:43:35.277002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 17:43:35.277021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 17:43:35.277032 | orchestrator | 2025-08-29 17:43:35.277043 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-08-29 17:43:35.277054 | orchestrator | Friday 29 August 2025 17:42:45 +0000 (0:00:03.165) 0:00:35.782 ********* 2025-08-29 17:43:35.277065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 17:43:35.277076 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:35.277093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 17:43:35.277105 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:35.277122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 17:43:35.277140 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:43:35.277151 | orchestrator | 2025-08-29 17:43:35.277162 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-08-29 17:43:35.277173 | orchestrator | Friday 29 August 2025 
17:42:47 +0000 (0:00:01.884) 0:00:37.667 ********* 2025-08-29 17:43:35.277184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 17:43:35.277195 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:35.277207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  
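Each service item above carries a healthcheck dict (`interval: 30`, `retries: 3`, `start_period: 5`, `timeout: 30`, and a `CMD-SHELL` test running `healthcheck_curl` against the API port). As a conceptual sketch only (not the container runtime's actual implementation), the `retries` semantics can be modeled like this: the status flips to unhealthy only after that many *consecutive* test failures, and any success resets the counter.

```python
def evaluate_health(results, retries=3):
    """Conceptual model of the healthcheck above: `results` is the
    sequence of pass/fail outcomes of the periodic test command; the
    container is unhealthy after `retries` consecutive failures."""
    consecutive = 0
    for ok in results:
        if ok:
            consecutive = 0          # any success resets the counter
        else:
            consecutive += 1
            if consecutive >= retries:
                return "unhealthy"
    return "healthy"
```

A single transient failure of `healthcheck_curl` therefore does not mark `placement_api` unhealthy; three back-to-back failures do.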
2025-08-29 17:43:35.277218 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:35.277264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 17:43:35.277283 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:43:35.277294 | orchestrator | 2025-08-29 17:43:35.277305 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-08-29 17:43:35.277316 | orchestrator | Friday 29 August 2025 17:42:48 +0000 (0:00:00.738) 0:00:38.406 ********* 2025-08-29 17:43:35.277332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 17:43:35.277344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 17:43:35.277356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 17:43:35.277367 | orchestrator | 2025-08-29 17:43:35.277378 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-08-29 17:43:35.277389 | orchestrator | Friday 29 August 2025 17:42:49 +0000 (0:00:01.409) 0:00:39.816 ********* 2025-08-29 17:43:35.277505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 17:43:35.277532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': 
'30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 17:43:35.277554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 17:43:35.277565 | orchestrator | 2025-08-29 17:43:35.277576 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-08-29 17:43:35.277587 | orchestrator | Friday 29 August 2025 17:42:52 +0000 (0:00:02.474) 0:00:42.290 ********* 2025-08-29 17:43:35.277598 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 17:43:35.277609 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 17:43:35.277620 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 
17:43:35.277631 | orchestrator | 2025-08-29 17:43:35.277642 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-08-29 17:43:35.277652 | orchestrator | Friday 29 August 2025 17:42:54 +0000 (0:00:01.757) 0:00:44.047 ********* 2025-08-29 17:43:35.277663 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:43:35.277674 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:43:35.277684 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:43:35.277695 | orchestrator | 2025-08-29 17:43:35.277706 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-08-29 17:43:35.277717 | orchestrator | Friday 29 August 2025 17:42:55 +0000 (0:00:01.385) 0:00:45.433 ********* 2025-08-29 17:43:35.277728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 17:43:35.277747 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:35.277764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 17:43:35.277775 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:35.277794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 17:43:35.277805 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:43:35.277816 | orchestrator | 2025-08-29 17:43:35.277827 | orchestrator | TASK [placement : Check placement containers] ********************************** 
2025-08-29 17:43:35.277838 | orchestrator | Friday 29 August 2025 17:42:56 +0000 (0:00:01.298) 0:00:46.732 ********* 2025-08-29 17:43:35.277849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 17:43:35.277861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 
17:43:35.277909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 17:43:35.277922 | orchestrator | 2025-08-29 17:43:35.277933 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-08-29 17:43:35.277944 | orchestrator | Friday 29 August 2025 17:42:58 +0000 (0:00:01.987) 0:00:48.720 ********* 2025-08-29 17:43:35.277954 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:43:35.277965 | orchestrator | 2025-08-29 17:43:35.277976 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-08-29 17:43:35.277987 | orchestrator | Friday 29 August 2025 17:43:01 +0000 (0:00:03.172) 0:00:51.892 ********* 2025-08-29 17:43:35.277997 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:43:35.278008 | orchestrator | 2025-08-29 17:43:35.278128 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-08-29 17:43:35.278144 | orchestrator | Friday 29 August 2025 17:43:04 +0000 (0:00:02.546) 0:00:54.439 ********* 2025-08-29 17:43:35.278154 | orchestrator | changed: [testbed-node-0] 2025-08-29 
17:43:35.278165 | orchestrator | 2025-08-29 17:43:35.278176 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 17:43:35.278187 | orchestrator | Friday 29 August 2025 17:43:18 +0000 (0:00:13.864) 0:01:08.303 ********* 2025-08-29 17:43:35.278198 | orchestrator | 2025-08-29 17:43:35.278209 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 17:43:35.278220 | orchestrator | Friday 29 August 2025 17:43:18 +0000 (0:00:00.215) 0:01:08.519 ********* 2025-08-29 17:43:35.278230 | orchestrator | 2025-08-29 17:43:35.278255 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 17:43:35.278274 | orchestrator | Friday 29 August 2025 17:43:18 +0000 (0:00:00.242) 0:01:08.761 ********* 2025-08-29 17:43:35.278292 | orchestrator | 2025-08-29 17:43:35.278312 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-08-29 17:43:35.278338 | orchestrator | Friday 29 August 2025 17:43:19 +0000 (0:00:00.215) 0:01:08.977 ********* 2025-08-29 17:43:35.278355 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:43:35.278372 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:43:35.278390 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:43:35.278407 | orchestrator | 2025-08-29 17:43:35.278424 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:43:35.278483 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:43:35.278504 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 17:43:35.278523 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 17:43:35.278551 | orchestrator | 2025-08-29 17:43:35.278563 | orchestrator | 
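The PLAY RECAP lines above follow Ansible's fixed `host : key=value ...` format (`ok=21 changed=15 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0`). When post-processing logs like this one, a minimal parser for that format can look as follows; the function name is illustrative, not part of any OSISM tooling.

```python
import re

# One recap line: a hostname, a colon, then whitespace-separated key=value counters.
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap(line):
    """Parse one Ansible PLAY RECAP line into (host, counters dict),
    e.g. counters['changed'] == 15; returns None for non-recap lines."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    counters = {k: int(v) for k, v in
                (field.split("=") for field in m.group("counters").split())}
    return m.group("host"), counters
```

Applied to the recap above, `failed=0` and `unreachable=0` on all three nodes is what lets the job proceed to the next play.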
2025-08-29 17:43:35.278574 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:43:35.278584 | orchestrator | Friday 29 August 2025 17:43:32 +0000 (0:00:13.189) 0:01:22.167 ********* 2025-08-29 17:43:35.278595 | orchestrator | =============================================================================== 2025-08-29 17:43:35.278606 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.86s 2025-08-29 17:43:35.278617 | orchestrator | placement : Restart placement-api container ---------------------------- 13.19s 2025-08-29 17:43:35.278627 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.32s 2025-08-29 17:43:35.278638 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.50s 2025-08-29 17:43:35.278649 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.77s 2025-08-29 17:43:35.278659 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.71s 2025-08-29 17:43:35.278670 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.28s 2025-08-29 17:43:35.278681 | orchestrator | placement : Creating placement databases -------------------------------- 3.17s 2025-08-29 17:43:35.278691 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 3.17s 2025-08-29 17:43:35.278702 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.11s 2025-08-29 17:43:35.278713 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.55s 2025-08-29 17:43:35.278724 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.47s 2025-08-29 17:43:35.278734 | orchestrator | placement : Check placement containers ---------------------------------- 1.99s 2025-08-29 
17:43:35.278745 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.88s 2025-08-29 17:43:35.278756 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.76s 2025-08-29 17:43:35.278766 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.53s 2025-08-29 17:43:35.278777 | orchestrator | placement : include_tasks ----------------------------------------------- 1.44s 2025-08-29 17:43:35.278788 | orchestrator | placement : Copying over config.json files for services ----------------- 1.41s 2025-08-29 17:43:35.278798 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.39s 2025-08-29 17:43:35.278809 | orchestrator | placement : Set placement policy file ----------------------------------- 1.38s 2025-08-29 17:43:35.278827 | orchestrator | 2025-08-29 17:43:35 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:43:35.278846 | orchestrator | 2025-08-29 17:43:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:43:38.324920 | orchestrator | 2025-08-29 17:43:38 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:43:38.326527 | orchestrator | 2025-08-29 17:43:38 | INFO  | Task a4dc02b4-0100-448d-9a4d-01868e3be8a2 is in state STARTED 2025-08-29 17:43:38.328732 | orchestrator | 2025-08-29 17:43:38 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:43:38.330325 | orchestrator | 2025-08-29 17:43:38 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state STARTED 2025-08-29 17:43:38.330390 | orchestrator | 2025-08-29 17:43:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:43:41.370606 | orchestrator | 2025-08-29 17:43:41 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:43:41.372412 | orchestrator | 2025-08-29 17:43:41 | INFO  | Task 
b1fa53de-d472-486b-a697-2d19686d913b is in state STARTED 2025-08-29 17:43:41.374805 | orchestrator | 2025-08-29 17:43:41 | INFO  | Task a4dc02b4-0100-448d-9a4d-01868e3be8a2 is in state STARTED 2025-08-29 17:43:41.376014 | orchestrator | 2025-08-29 17:43:41 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:43:41.379918 | orchestrator | 2025-08-29 17:43:41 | INFO  | Task 33ef573d-0109-4a37-a3a6-50be27307acb is in state SUCCESS 2025-08-29 17:43:41.380190 | orchestrator | 2025-08-29 17:43:41.381772 | orchestrator | 2025-08-29 17:43:41.381800 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:43:41.381812 | orchestrator | 2025-08-29 17:43:41.381823 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 17:43:41.381834 | orchestrator | Friday 29 August 2025 17:38:31 +0000 (0:00:00.713) 0:00:00.713 ********* 2025-08-29 17:43:41.381845 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:43:41.381857 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:43:41.381867 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:43:41.381878 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:43:41.381888 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:43:41.381899 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:43:41.381910 | orchestrator | 2025-08-29 17:43:41.381921 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:43:41.381932 | orchestrator | Friday 29 August 2025 17:38:33 +0000 (0:00:02.100) 0:00:02.814 ********* 2025-08-29 17:43:41.381943 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-08-29 17:43:41.381954 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-08-29 17:43:41.381964 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-08-29 17:43:41.381975 | orchestrator | ok: [testbed-node-3] => 
(item=enable_neutron_True) 2025-08-29 17:43:41.381986 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-08-29 17:43:41.382006 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-08-29 17:43:41.382059 | orchestrator | 2025-08-29 17:43:41.382071 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-08-29 17:43:41.382082 | orchestrator | 2025-08-29 17:43:41.382093 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 17:43:41.382104 | orchestrator | Friday 29 August 2025 17:38:35 +0000 (0:00:01.228) 0:00:04.043 ********* 2025-08-29 17:43:41.382116 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:43:41.382128 | orchestrator | 2025-08-29 17:43:41.382139 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-08-29 17:43:41.382150 | orchestrator | Friday 29 August 2025 17:38:36 +0000 (0:00:01.381) 0:00:05.424 ********* 2025-08-29 17:43:41.382160 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:43:41.382171 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:43:41.382182 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:43:41.382192 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:43:41.382203 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:43:41.382213 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:43:41.382224 | orchestrator | 2025-08-29 17:43:41.382235 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-08-29 17:43:41.382245 | orchestrator | Friday 29 August 2025 17:38:37 +0000 (0:00:01.312) 0:00:06.737 ********* 2025-08-29 17:43:41.382256 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:43:41.382267 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:43:41.382277 | orchestrator 
| ok: [testbed-node-2] 2025-08-29 17:43:41.382288 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:43:41.382298 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:43:41.382309 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:43:41.382319 | orchestrator | 2025-08-29 17:43:41.382332 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-08-29 17:43:41.382344 | orchestrator | Friday 29 August 2025 17:38:38 +0000 (0:00:01.169) 0:00:07.906 ********* 2025-08-29 17:43:41.382356 | orchestrator | ok: [testbed-node-0] => { 2025-08-29 17:43:41.382368 | orchestrator |  "changed": false, 2025-08-29 17:43:41.382380 | orchestrator |  "msg": "All assertions passed" 2025-08-29 17:43:41.382406 | orchestrator | } 2025-08-29 17:43:41.382419 | orchestrator | ok: [testbed-node-1] => { 2025-08-29 17:43:41.382431 | orchestrator |  "changed": false, 2025-08-29 17:43:41.382470 | orchestrator |  "msg": "All assertions passed" 2025-08-29 17:43:41.382482 | orchestrator | } 2025-08-29 17:43:41.382494 | orchestrator | ok: [testbed-node-2] => { 2025-08-29 17:43:41.382506 | orchestrator |  "changed": false, 2025-08-29 17:43:41.382518 | orchestrator |  "msg": "All assertions passed" 2025-08-29 17:43:41.382530 | orchestrator | } 2025-08-29 17:43:41.382542 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 17:43:41.382554 | orchestrator |  "changed": false, 2025-08-29 17:43:41.382579 | orchestrator |  "msg": "All assertions passed" 2025-08-29 17:43:41.382591 | orchestrator | } 2025-08-29 17:43:41.382603 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 17:43:41.382615 | orchestrator |  "changed": false, 2025-08-29 17:43:41.382627 | orchestrator |  "msg": "All assertions passed" 2025-08-29 17:43:41.382639 | orchestrator | } 2025-08-29 17:43:41.382651 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 17:43:41.382663 | orchestrator |  "changed": false, 2025-08-29 17:43:41.382675 | orchestrator |  "msg": "All assertions passed" 2025-08-29 
17:43:41.382686 | orchestrator | } 2025-08-29 17:43:41.382697 | orchestrator | 2025-08-29 17:43:41.382707 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-08-29 17:43:41.382718 | orchestrator | Friday 29 August 2025 17:38:40 +0000 (0:00:01.071) 0:00:08.978 ********* 2025-08-29 17:43:41.382729 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:41.382740 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:41.382750 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:43:41.382761 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:41.382772 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:43:41.382782 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:43:41.382793 | orchestrator | 2025-08-29 17:43:41.382803 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-08-29 17:43:41.382814 | orchestrator | Friday 29 August 2025 17:38:40 +0000 (0:00:00.719) 0:00:09.697 ********* 2025-08-29 17:43:41.382825 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-08-29 17:43:41.382836 | orchestrator | 2025-08-29 17:43:41.382847 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-08-29 17:43:41.382857 | orchestrator | Friday 29 August 2025 17:38:44 +0000 (0:00:03.442) 0:00:13.140 ********* 2025-08-29 17:43:41.382868 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-08-29 17:43:41.382880 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-08-29 17:43:41.382891 | orchestrator | 2025-08-29 17:43:41.382914 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-08-29 17:43:41.382926 | orchestrator | Friday 29 August 2025 17:38:50 +0000 (0:00:06.429) 0:00:19.569 ********* 2025-08-29 
17:43:41.382937 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-08-29 17:43:41.382948 | orchestrator |
2025-08-29 17:43:41.382958 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-08-29 17:43:41.382969 | orchestrator | Friday 29 August 2025 17:38:54 +0000 (0:00:03.483) 0:00:23.053 *********
2025-08-29 17:43:41.382980 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 17:43:41.382991 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-08-29 17:43:41.383002 | orchestrator |
2025-08-29 17:43:41.383012 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-08-29 17:43:41.383023 | orchestrator | Friday 29 August 2025 17:38:57 +0000 (0:00:03.335) 0:00:26.389 *********
2025-08-29 17:43:41.383034 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 17:43:41.383045 | orchestrator |
2025-08-29 17:43:41.383056 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-08-29 17:43:41.383066 | orchestrator | Friday 29 August 2025 17:39:00 +0000 (0:00:03.141) 0:00:29.530 *********
2025-08-29 17:43:41.383124 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-08-29 17:43:41.383136 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-08-29 17:43:41.383146 | orchestrator |
2025-08-29 17:43:41.383157 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-08-29 17:43:41.383168 | orchestrator | Friday 29 August 2025 17:39:07 +0000 (0:00:07.240) 0:00:36.770 *********
2025-08-29 17:43:41.383178 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.383189 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.383200 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.383211 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:41.383221 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:43:41.383232 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.383242 | orchestrator |
2025-08-29 17:43:41.383253 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-08-29 17:43:41.383264 | orchestrator | Friday 29 August 2025 17:39:08 +0000 (0:00:00.817) 0:00:37.588 *********
2025-08-29 17:43:41.383275 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.383286 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.383296 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.383307 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:41.383317 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.383328 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:43:41.383339 | orchestrator |
2025-08-29 17:43:41.383349 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-08-29 17:43:41.383360 | orchestrator | Friday 29 August 2025 17:39:10 +0000 (0:00:02.109) 0:00:39.697 *********
2025-08-29 17:43:41.383371 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:43:41.383382 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:43:41.383392 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:43:41.383403 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:43:41.383414 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:43:41.383424 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:43:41.383449 | orchestrator |
2025-08-29 17:43:41.383461 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-08-29 17:43:41.383471 | orchestrator | Friday 29 August 2025 17:39:11 +0000 (0:00:01.063) 0:00:40.761 *********
2025-08-29 17:43:41.383482 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.383493 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:41.383503 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.383514 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.383525 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.383535 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:43:41.383546 | orchestrator |
2025-08-29 17:43:41.383556 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-08-29 17:43:41.383567 | orchestrator | Friday 29 August 2025 17:39:14 +0000 (0:00:02.609) 0:00:43.370 *********
2025-08-29 17:43:41.383587 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:43:41.383610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.383630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.383642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.383654 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:43:41.383670 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:43:41.383682 | orchestrator |
2025-08-29 17:43:41.383693 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-08-29 17:43:41.383710 | orchestrator | Friday 29 August 2025 17:39:17 +0000 (0:00:03.487) 0:00:46.857 *********
2025-08-29 17:43:41.383721 | orchestrator | [WARNING]: Skipped
2025-08-29 17:43:41.383733 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-08-29 17:43:41.383744 | orchestrator | due to this access issue:
2025-08-29 17:43:41.383755 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-08-29 17:43:41.383766 | orchestrator | a directory
2025-08-29 17:43:41.383777 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 17:43:41.383788 | orchestrator |
2025-08-29 17:43:41.383798 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-08-29 17:43:41.383815 | orchestrator | Friday 29 August 2025 17:39:18 +0000 (0:00:00.935) 0:00:47.793 *********
2025-08-29 17:43:41.383826 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:43:41.383837 | orchestrator |
2025-08-29 17:43:41.383848 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2025-08-29 17:43:41.383859 | orchestrator | Friday 29 August 2025 17:39:20 +0000 (0:00:01.468) 0:00:49.262 *********
2025-08-29 17:43:41.383871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.383883 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:43:41.383899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.383911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.383936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:43:41.383948 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:43:41.383960 | orchestrator |
2025-08-29 17:43:41.383971 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2025-08-29 17:43:41.383982 | orchestrator | Friday 29 August 2025 17:39:24 +0000 (0:00:03.749) 0:00:53.011 *********
2025-08-29 17:43:41.383993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.384009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.384027 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.384038 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.384049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.384065 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.384077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:43:41.384088 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:41.384099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:43:41.384111 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:43:41.384122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:43:41.384133 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.384144 | orchestrator |
2025-08-29 17:43:41.384155 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2025-08-29 17:43:41.384171 | orchestrator | Friday 29 August 2025 17:39:27 +0000 (0:00:04.926) 0:00:56.251 *********
2025-08-29 17:43:41.384187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.384199 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.384216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.384228 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.384239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:43:41.384250 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:41.384262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.384273 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.384294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:43:41.384312 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:43:41.384323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:43:41.384335 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.384345 | orchestrator |
2025-08-29 17:43:41.384356 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-08-29 17:43:41.384367 | orchestrator | Friday 29 August 2025 17:39:32 +0000 (0:00:04.244) 0:01:01.177 *********
2025-08-29 17:43:41.384378 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:43:41.384388 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.384399 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.384410 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:41.384420 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.384431 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.384458 | orchestrator |
2025-08-29 17:43:41.384470 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-08-29 17:43:41.384486 | orchestrator | Friday 29 August 2025 17:39:36 +0000 (0:00:00.269) 0:01:05.421 *********
2025-08-29 17:43:41.384497 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.384508 | orchestrator |
2025-08-29 17:43:41.384519 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-08-29 17:43:41.384529 | orchestrator | Friday 29 August 2025 17:39:36 +0000 (0:00:00.835) 0:01:05.690 *********
2025-08-29 17:43:41.384540 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.384551 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.384562 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.384572 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:41.384583 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:43:41.384593 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.384604 | orchestrator |
2025-08-29 17:43:41.384614 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2025-08-29 17:43:41.384625 | orchestrator | Friday 29 August 2025 17:39:37 +0000 (0:00:00.835) 0:01:06.526 *********
2025-08-29 17:43:41.384636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.384655 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.384666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.384683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:43:41.384695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.385048 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.385063 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:41.385074 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.385085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:43:41.385097 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:43:41.385108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:43:41.385126 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.385137 | orchestrator |
2025-08-29 17:43:41.385148 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2025-08-29 17:43:41.385159 | orchestrator | Friday 29 August 2025 17:39:41 +0000 (0:00:03.822) 0:01:10.349 *********
2025-08-29 17:43:41.385170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.385186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.385206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:43:41.385218 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:43:41.385236 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 17:43:41.385247 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 17:43:41.385259 | orchestrator | 2025-08-29 17:43:41.385270 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-08-29 17:43:41.385285 | orchestrator | Friday 29 August 2025 17:39:47 +0000 (0:00:05.613) 0:01:15.962 ********* 2025-08-29 17:43:41.385297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 17:43:41.385314 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 17:43:41.385326 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 17:43:41.385347 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 17:43:41.385363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 17:43:41.385375 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 17:43:41.385386 | orchestrator | 2025-08-29 17:43:41.385397 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-08-29 17:43:41.385408 | orchestrator | Friday 29 August 2025 17:39:57 +0000 (0:00:10.189) 0:01:26.151 ********* 2025-08-29 17:43:41.385425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 17:43:41.385473 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:41.385486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:43:41.385497 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:43:41.385508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 17:43:41.385519 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:41.385535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 17:43:41.385546 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:43:41.385558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:43:41.385569 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:43:41.385587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:43:41.385604 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:41.385615 | orchestrator | 2025-08-29 17:43:41.385626 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-08-29 17:43:41.385637 | orchestrator | Friday 29 August 2025 17:40:00 +0000 (0:00:03.454) 0:01:29.606 ********* 2025-08-29 17:43:41.385648 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:41.385660 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:43:41.385671 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:43:41.385683 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:43:41.385695 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:43:41.385707 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:43:41.385718 | orchestrator | 2025-08-29 17:43:41.385731 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-08-29 17:43:41.385742 | orchestrator | Friday 29 August 2025 17:40:05 +0000 (0:00:04.445) 0:01:34.051 ********* 2025-08-29 17:43:41.385755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:43:41.385767 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:43:41.385784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:43:41.385797 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:43:41.385810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:43:41.385828 | orchestrator | 
skipping: [testbed-node-3] 2025-08-29 17:43:41.385847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 17:43:41.385860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 17:43:41.385874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 17:43:41.385886 | orchestrator | 2025-08-29 17:43:41.385898 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-08-29 17:43:41.385910 | orchestrator | Friday 29 August 2025 17:40:10 +0000 (0:00:05.521) 0:01:39.573 ********* 2025-08-29 17:43:41.385922 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:41.385934 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:41.385946 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:43:41.385958 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:41.385970 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:43:41.385981 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:43:41.385993 | orchestrator | 2025-08-29 17:43:41.386005 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-08-29 17:43:41.386051 | orchestrator | Friday 29 August 2025 17:40:14 +0000 (0:00:04.322) 0:01:43.896 ********* 2025-08-29 17:43:41.386065 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:41.386076 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:41.386086 | orchestrator | 
skipping: [testbed-node-2] 2025-08-29 17:43:41.386097 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:41.386107 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:43:41.386118 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:43:41.386135 | orchestrator | 2025-08-29 17:43:41.386145 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-08-29 17:43:41.386156 | orchestrator | Friday 29 August 2025 17:40:18 +0000 (0:00:03.912) 0:01:47.808 ********* 2025-08-29 17:43:41.386167 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:41.386177 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:43:41.386188 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:41.386199 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:43:41.386209 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:41.386220 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:43:41.386230 | orchestrator | 2025-08-29 17:43:41.386241 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-08-29 17:43:41.386252 | orchestrator | Friday 29 August 2025 17:40:21 +0000 (0:00:02.524) 0:01:50.333 ********* 2025-08-29 17:43:41.386262 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:43:41.386273 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:41.386284 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:41.386294 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:41.386305 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:43:41.386315 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:43:41.386326 | orchestrator | 2025-08-29 17:43:41.386337 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-08-29 17:43:41.386347 | orchestrator | Friday 29 August 2025 17:40:23 +0000 (0:00:02.559) 0:01:52.892 ********* 2025-08-29 17:43:41.386358 | orchestrator | 
skipping: [testbed-node-2] 2025-08-29 17:43:41.386369 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:41.386380 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:41.386390 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:43:41.386407 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:41.386418 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:43:41.386428 | orchestrator | 2025-08-29 17:43:41.386456 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-08-29 17:43:41.386468 | orchestrator | Friday 29 August 2025 17:40:26 +0000 (0:00:02.573) 0:01:55.465 ********* 2025-08-29 17:43:41.386478 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:41.386489 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:41.386500 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:41.386510 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:43:41.386521 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:43:41.386531 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:43:41.386542 | orchestrator | 2025-08-29 17:43:41.386553 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-08-29 17:43:41.386563 | orchestrator | Friday 29 August 2025 17:40:29 +0000 (0:00:02.786) 0:01:58.252 ********* 2025-08-29 17:43:41.386574 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 17:43:41.386585 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:41.386595 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 17:43:41.386606 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:41.386617 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 17:43:41.386627 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
17:43:41.386638 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 17:43:41.386649 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:41.386659 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 17:43:41.386670 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:43:41.386681 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 17:43:41.386692 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:43:41.386702 | orchestrator | 2025-08-29 17:43:41.386719 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-08-29 17:43:41.386730 | orchestrator | Friday 29 August 2025 17:40:32 +0000 (0:00:03.002) 0:02:01.255 ********* 2025-08-29 17:43:41.386741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 17:43:41.386753 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:41.386769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 17:43:41.386780 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:43:41.386797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 17:43:41.386809 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:41.386820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:43:41.386831 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:43:41.386842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:43:41.386859 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:41.386870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:43:41.386881 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:43:41.386892 | orchestrator | 2025-08-29 17:43:41.386907 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-08-29 17:43:41.386918 | orchestrator | Friday 29 August 2025 17:40:36 +0000 (0:00:03.888) 0:02:05.143 ********* 2025-08-29 17:43:41.386929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 17:43:41.386940 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:41.386958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 17:43:41.386969 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:41.386980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 17:43:41.386999 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:43:41.387010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:43:41.387021 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:43:41.387036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:43:41.387048 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:41.387059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:43:41.387070 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.387081 | orchestrator |
2025-08-29 17:43:41.387091 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-08-29 17:43:41.387102 | orchestrator | Friday 29 August 2025 17:40:40 +0000 (0:00:04.544) 0:02:09.687 *********
2025-08-29 17:43:41.387113 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.387128 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.387139 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:41.387150 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.387160 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:43:41.387171 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.387182 | orchestrator |
2025-08-29 17:43:41.387192 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-08-29 17:43:41.387203 | orchestrator | Friday 29 August 2025 17:40:44 +0000 (0:00:03.725) 0:02:13.413 *********
2025-08-29 17:43:41.387220 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.387231 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.387241 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.387252 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:43:41.387263 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:43:41.387273 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:43:41.387284 | orchestrator |
2025-08-29 17:43:41.387294 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-08-29 17:43:41.387305 | orchestrator | Friday 29 August 2025 17:40:52 +0000 (0:00:08.109) 0:02:21.522 *********
2025-08-29 17:43:41.387316 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.387326 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.387337 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.387347 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:41.387358 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.387368 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:43:41.387379 | orchestrator |
2025-08-29 17:43:41.387390 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-08-29 17:43:41.387401 | orchestrator | Friday 29 August 2025 17:40:57 +0000 (0:00:05.057) 0:02:26.580 *********
2025-08-29 17:43:41.387411 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.387422 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.387433 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.387458 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:41.387468 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:43:41.387479 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.387490 | orchestrator |
2025-08-29 17:43:41.387500 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-08-29 17:43:41.387511 | orchestrator | Friday 29 August 2025 17:41:03 +0000 (0:00:05.818) 0:02:32.398 *********
2025-08-29 17:43:41.387522 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.387532 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.387543 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.387553 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:41.387564 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:43:41.387575 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.387585 | orchestrator |
2025-08-29 17:43:41.387596 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-08-29 17:43:41.387607 | orchestrator | Friday 29 August 2025 17:41:07 +0000 (0:00:03.629) 0:02:36.027 *********
2025-08-29 17:43:41.387617 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.387628 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.387639 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.387649 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:41.387659 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:43:41.387670 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.387680 | orchestrator |
2025-08-29 17:43:41.387691 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-08-29 17:43:41.387702 | orchestrator | Friday 29 August 2025 17:41:11 +0000 (0:00:04.655) 0:02:40.683 *********
2025-08-29 17:43:41.387712 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.387723 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.387733 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:41.387744 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:43:41.387754 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.387765 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.387775 | orchestrator |
2025-08-29 17:43:41.387786 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-08-29 17:43:41.387797 | orchestrator | Friday 29 August 2025 17:41:17 +0000 (0:00:05.491) 0:02:46.175 *********
2025-08-29 17:43:41.387808 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.387823 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.387841 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.387852 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:43:41.387862 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:41.387873 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.387884 | orchestrator |
2025-08-29 17:43:41.387894 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-08-29 17:43:41.387905 | orchestrator | Friday 29 August 2025 17:41:22 +0000 (0:00:05.445) 0:02:51.620 *********
2025-08-29 17:43:41.387915 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.387926 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.387936 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.387946 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:43:41.387957 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:41.387968 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.387979 | orchestrator |
2025-08-29 17:43:41.387989 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-08-29 17:43:41.388000 | orchestrator | Friday 29 August 2025 17:41:26 +0000 (0:00:04.090) 0:02:55.710 *********
2025-08-29 17:43:41.388011 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-08-29 17:43:41.388022 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:41.388032 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-08-29 17:43:41.388043 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.388054 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-08-29 17:43:41.388065 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.388075 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-08-29 17:43:41.388086 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:41.388102 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
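The container definitions echoed in the skipped items above each carry a Docker-style healthcheck dict: `interval`, `retries`, `start_period`, `timeout`, and a `CMD-SHELL` probe such as `healthcheck_curl http://192.168.16.10:9696` or `healthcheck_port neutron-ovn-metadata-agent 6640`. A minimal sketch of the retry semantics those fields imply (the function and probe names here are assumptions for illustration, not kolla code):

```python
import time

def run_healthcheck(probe, interval=30, retries=3):
    """Re-run `probe` up to `retries` times, mirroring the
    interval/retries fields in the logged container definitions.
    `probe` is a callable returning True when the service answers."""
    failures = 0
    while failures < retries:
        if probe():              # e.g. an HTTP GET or TCP connect with a timeout
            return True          # container is considered healthy
        failures += 1
        time.sleep(interval)     # Docker waits `interval` seconds between probes
    return False                 # marked unhealthy after `retries` failures

# usage: a fake probe that fails twice, then succeeds
attempts = iter([False, False, True])
print(run_healthcheck(lambda: next(attempts), interval=0))  # True
```

With `retries=3`, a service that recovers on the third probe is still reported healthy, which matches why transient failures during a restart do not flip the container state immediately.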
2025-08-29 17:43:41.388113 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:41.388124 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 17:43:41.388134 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:43:41.388145 | orchestrator | 2025-08-29 17:43:41.388156 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-08-29 17:43:41.388166 | orchestrator | Friday 29 August 2025 17:41:31 +0000 (0:00:04.943) 0:03:00.654 ********* 2025-08-29 17:43:41.388178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 17:43:41.388189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 17:43:41.388206 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:41.388217 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:43:41.388233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:43:41.388244 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:43:41.388255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:43:41.388267 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:41.388283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 17:43:41.388295 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:41.388306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:43:41.388317 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:43:41.388328 | orchestrator | 2025-08-29 17:43:41.388345 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-08-29 17:43:41.388356 | orchestrator | Friday 29 August 2025 17:41:36 +0000 (0:00:04.821) 0:03:05.475 ********* 2025-08-29 17:43:41.388367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 17:43:41.388383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 17:43:41.388400 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 17:43:41.388412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 
17:43:41.388423 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 17:43:41.388488 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 17:43:41.388501 | orchestrator | 2025-08-29 17:43:41.388512 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 17:43:41.388523 | orchestrator | Friday 29 August 2025 17:41:42 +0000 (0:00:06.010) 0:03:11.486 ********* 2025-08-29 17:43:41.388534 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:41.388544 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:41.388555 | 
orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:41.388565 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:41.388581 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:43:41.388592 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:43:41.388603 | orchestrator |
2025-08-29 17:43:41.388613 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-08-29 17:43:41.388624 | orchestrator | Friday 29 August 2025 17:41:43 +0000 (0:00:00.803) 0:03:12.290 *********
2025-08-29 17:43:41.388635 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:43:41.388645 | orchestrator |
2025-08-29 17:43:41.388656 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-08-29 17:43:41.388667 | orchestrator | Friday 29 August 2025 17:41:45 +0000 (0:00:02.205) 0:03:14.495 *********
2025-08-29 17:43:41.388677 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:43:41.388688 | orchestrator |
2025-08-29 17:43:41.388699 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-08-29 17:43:41.388708 | orchestrator | Friday 29 August 2025 17:41:47 +0000 (0:00:02.330) 0:03:16.826 *********
2025-08-29 17:43:41.388718 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:43:41.388727 | orchestrator |
2025-08-29 17:43:41.388737 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-08-29 17:43:41.388746 | orchestrator | Friday 29 August 2025 17:42:34 +0000 (0:00:46.588) 0:04:03.415 *********
2025-08-29 17:43:41.388755 | orchestrator |
2025-08-29 17:43:41.388765 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-08-29 17:43:41.388774 | orchestrator | Friday 29 August 2025 17:42:34 +0000 (0:00:00.092) 0:04:03.507 *********
2025-08-29 17:43:41.388784 | orchestrator |
2025-08-29 17:43:41.388793 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-08-29 17:43:41.388802 | orchestrator | Friday 29 August 2025 17:42:34 +0000 (0:00:00.315) 0:04:03.822 *********
2025-08-29 17:43:41.388812 | orchestrator |
2025-08-29 17:43:41.388821 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-08-29 17:43:41.388831 | orchestrator | Friday 29 August 2025 17:42:34 +0000 (0:00:00.073) 0:04:03.896 *********
2025-08-29 17:43:41.388840 | orchestrator |
2025-08-29 17:43:41.388855 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-08-29 17:43:41.388864 | orchestrator | Friday 29 August 2025 17:42:35 +0000 (0:00:00.084) 0:04:03.980 *********
2025-08-29 17:43:41.388874 | orchestrator |
2025-08-29 17:43:41.388883 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-08-29 17:43:41.388898 | orchestrator | Friday 29 August 2025 17:42:35 +0000 (0:00:00.075) 0:04:04.055 *********
2025-08-29 17:43:41.388908 | orchestrator |
2025-08-29 17:43:41.388917 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-08-29 17:43:41.388927 | orchestrator | Friday 29 August 2025 17:42:35 +0000 (0:00:00.074) 0:04:04.130 *********
2025-08-29 17:43:41.388936 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:43:41.388946 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:43:41.388955 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:43:41.388965 | orchestrator |
2025-08-29 17:43:41.388974 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-08-29 17:43:41.388984 | orchestrator | Friday 29 August 2025 17:43:10 +0000 (0:00:35.487) 0:04:39.618 *********
2025-08-29 17:43:41.388993 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:43:41.389003 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:43:41.389012 |
orchestrator | changed: [testbed-node-3]
2025-08-29 17:43:41.389022 | orchestrator |
2025-08-29 17:43:41.389031 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:43:41.389041 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-08-29 17:43:41.389051 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-08-29 17:43:41.389061 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-08-29 17:43:41.389070 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-08-29 17:43:41.389080 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-08-29 17:43:41.389089 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-08-29 17:43:41.389099 | orchestrator |
2025-08-29 17:43:41.389108 | orchestrator |
2025-08-29 17:43:41.389118 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:43:41.389128 | orchestrator | Friday 29 August 2025 17:43:39 +0000 (0:00:28.774) 0:05:08.393 *********
2025-08-29 17:43:41.389137 | orchestrator | ===============================================================================
2025-08-29 17:43:41.389147 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 46.59s
2025-08-29 17:43:41.389156 | orchestrator | neutron : Restart neutron-server container ----------------------------- 35.49s
2025-08-29 17:43:41.389165 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 28.77s
2025-08-29 17:43:41.389175 | orchestrator | neutron : Copying over neutron.conf ------------------------------------ 10.19s
2025-08-29 17:43:41.389184 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 8.11s
2025-08-29 17:43:41.389193 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.24s
2025-08-29 17:43:41.389203 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.43s
2025-08-29 17:43:41.389216 | orchestrator | neutron : Check neutron containers -------------------------------------- 6.01s
2025-08-29 17:43:41.389226 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 5.82s
2025-08-29 17:43:41.389235 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.61s
2025-08-29 17:43:41.389244 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.52s
2025-08-29 17:43:41.389254 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 5.49s
2025-08-29 17:43:41.389263 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 5.45s
2025-08-29 17:43:41.389278 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 5.06s
2025-08-29 17:43:41.389287 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 4.94s
2025-08-29 17:43:41.389297 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.93s
2025-08-29 17:43:41.389306 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 4.82s
2025-08-29 17:43:41.389316 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 4.66s
2025-08-29 17:43:41.389325 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 4.54s
2025-08-29 17:43:41.389334 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.45s
2025-08-29 17:43:41.389344 | orchestrator |
2025-08-29 17:43:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:43:44.430719 | orchestrator | 2025-08-29 17:43:44 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state STARTED 2025-08-29 17:43:44.432774 | orchestrator | 2025-08-29 17:43:44 | INFO  | Task b1fa53de-d472-486b-a697-2d19686d913b is in state STARTED 2025-08-29 17:43:44.435398 | orchestrator | 2025-08-29 17:43:44 | INFO  | Task a4dc02b4-0100-448d-9a4d-01868e3be8a2 is in state STARTED 2025-08-29 17:43:44.437989 | orchestrator | 2025-08-29 17:43:44 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:43:44.438692 | orchestrator | 2025-08-29 17:43:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:43:47.471627 | orchestrator | 2025-08-29 17:43:47.471723 | orchestrator | 2025-08-29 17:43:47.471738 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:43:47.471751 | orchestrator | 2025-08-29 17:43:47.471763 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 17:43:47.471774 | orchestrator | Friday 29 August 2025 17:40:10 +0000 (0:00:00.362) 0:00:00.362 ********* 2025-08-29 17:43:47.471786 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:43:47.471798 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:43:47.471809 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:43:47.471820 | orchestrator | 2025-08-29 17:43:47.471831 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:43:47.471842 | orchestrator | Friday 29 August 2025 17:40:10 +0000 (0:00:00.416) 0:00:00.779 ********* 2025-08-29 17:43:47.471853 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-08-29 17:43:47.471864 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-08-29 17:43:47.471875 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 
2025-08-29 17:43:47.471885 | orchestrator | 2025-08-29 17:43:47.471896 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-08-29 17:43:47.471907 | orchestrator | 2025-08-29 17:43:47.471918 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 17:43:47.471929 | orchestrator | Friday 29 August 2025 17:40:11 +0000 (0:00:00.509) 0:00:01.288 ********* 2025-08-29 17:43:47.471940 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:43:47.471951 | orchestrator | 2025-08-29 17:43:47.471962 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-08-29 17:43:47.471973 | orchestrator | Friday 29 August 2025 17:40:12 +0000 (0:00:00.931) 0:00:02.220 ********* 2025-08-29 17:43:47.471983 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-08-29 17:43:47.471994 | orchestrator | 2025-08-29 17:43:47.472005 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-08-29 17:43:47.472016 | orchestrator | Friday 29 August 2025 17:40:15 +0000 (0:00:03.729) 0:00:05.949 ********* 2025-08-29 17:43:47.472027 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-08-29 17:43:47.472068 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-08-29 17:43:47.472088 | orchestrator | 2025-08-29 17:43:47.472107 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-08-29 17:43:47.472126 | orchestrator | Friday 29 August 2025 17:40:22 +0000 (0:00:06.559) 0:00:12.509 ********* 2025-08-29 17:43:47.472148 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 17:43:47.472167 | orchestrator | 2025-08-29 17:43:47.472189 | 
orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-08-29 17:43:47.472210 | orchestrator | Friday 29 August 2025 17:40:25 +0000 (0:00:03.242) 0:00:15.752 ********* 2025-08-29 17:43:47.472223 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 17:43:47.472236 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-08-29 17:43:47.472248 | orchestrator | 2025-08-29 17:43:47.472260 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-08-29 17:43:47.472272 | orchestrator | Friday 29 August 2025 17:40:29 +0000 (0:00:03.861) 0:00:19.614 ********* 2025-08-29 17:43:47.472284 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 17:43:47.472296 | orchestrator | 2025-08-29 17:43:47.472321 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-08-29 17:43:47.472334 | orchestrator | Friday 29 August 2025 17:40:32 +0000 (0:00:03.205) 0:00:22.819 ********* 2025-08-29 17:43:47.472346 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-08-29 17:43:47.472358 | orchestrator | 2025-08-29 17:43:47.472370 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-08-29 17:43:47.472382 | orchestrator | Friday 29 August 2025 17:40:37 +0000 (0:00:04.683) 0:00:27.503 ********* 2025-08-29 17:43:47.472397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 17:43:47.472430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 17:43:47.472470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 17:43:47.472537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.472557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.472570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.472582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.472646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.472660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.472678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.472690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.472706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.472718 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.472729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.472747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.472759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.472778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.472790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.472801 | orchestrator | 2025-08-29 17:43:47.472813 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-08-29 17:43:47.472824 | orchestrator | Friday 29 August 2025 17:40:42 +0000 
(0:00:04.790) 0:00:32.293 ********* 2025-08-29 17:43:47.472835 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:47.472846 | orchestrator | 2025-08-29 17:43:47.472857 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-08-29 17:43:47.472872 | orchestrator | Friday 29 August 2025 17:40:42 +0000 (0:00:00.299) 0:00:32.592 ********* 2025-08-29 17:43:47.472883 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:47.472894 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:47.472905 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:43:47.472915 | orchestrator | 2025-08-29 17:43:47.472926 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 17:43:47.472937 | orchestrator | Friday 29 August 2025 17:40:43 +0000 (0:00:00.587) 0:00:33.180 ********* 2025-08-29 17:43:47.472948 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:43:47.472959 | orchestrator | 2025-08-29 17:43:47.472970 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-08-29 17:43:47.472980 | orchestrator | Friday 29 August 2025 17:40:46 +0000 (0:00:03.095) 0:00:36.275 ********* 2025-08-29 17:43:47.472992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 17:43:47.473024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 17:43:47.473043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2025-08-29 17:43:47.473058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.473084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.473104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.473123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.473218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.473238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 
17:43:47.473249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.473261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.473278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.473290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.473333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.473354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.473365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.473376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.473388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.473399 | orchestrator | 2025-08-29 17:43:47.473414 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-08-29 17:43:47.473426 | orchestrator | Friday 29 August 2025 17:40:55 +0000 (0:00:09.525) 0:00:45.801 ********* 2025-08-29 17:43:47.473471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 17:43:47.473535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 17:43:47.473549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 17:43:47.473560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 17:43:47.473572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.473588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.473600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.473618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.473661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2',
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47 | INFO  | Task cd28395a-4c14-4681-aa65-d62982fbff5f is in state SUCCESS 2025-08-29 17:43:47.473675 | orchestrator | 2025-08-29 17:43:47.473688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.473700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.473711 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:47.473723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.473734 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:47.473750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 17:43:47.473770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})  2025-08-29 17:43:47.473812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.473826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.473837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.473849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.473860 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:43:47.473871 | orchestrator | 2025-08-29 17:43:47.473882 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-08-29 17:43:47.473893 | orchestrator | Friday 29 August 2025 17:40:58 +0000 (0:00:02.933) 0:00:48.734 ********* 2025-08-29 17:43:47.473908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 17:43:47.473926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 17:43:47.473969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.473982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.473993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.474005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.474065 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:47.474097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 17:43:47.474131 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 17:43:47.474194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.474208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.474219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.474231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.474242 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:43:47.474259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 17:43:47.474278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 17:43:47.474317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.474330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.474342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.474353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.474364 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:43:47.474375 | orchestrator | 2025-08-29 17:43:47.474386 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-08-29 17:43:47.474398 | orchestrator | Friday 29 August 2025 17:41:02 +0000 (0:00:03.946) 0:00:52.680 ********* 2025-08-29 17:43:47.474421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 17:43:47.474433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 17:43:47.474536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 17:43:47.474550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474813 | orchestrator | 
2025-08-29 17:43:47.474824 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-08-29 17:43:47.474842 | orchestrator | Friday 29 August 2025 17:41:11 +0000 (0:00:08.359) 0:01:01.039 ********* 2025-08-29 17:43:47.474853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 17:43:47.474869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}}}}) 2025-08-29 17:43:47.474881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 17:43:47.474923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.474994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}}) 2025-08-29 17:43:47.475086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475143 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475153 | orchestrator | 2025-08-29 17:43:47.475169 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-08-29 17:43:47.475179 | orchestrator | Friday 29 August 2025 17:41:43 +0000 (0:00:32.019) 0:01:33.059 ********* 2025-08-29 17:43:47.475189 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 17:43:47.475198 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 17:43:47.475208 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 17:43:47.475217 | orchestrator | 2025-08-29 17:43:47.475227 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-08-29 17:43:47.475237 | orchestrator | Friday 29 August 2025 17:41:48 +0000 (0:00:05.501) 0:01:38.560 ********* 2025-08-29 17:43:47.475246 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 17:43:47.475256 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 17:43:47.475265 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 17:43:47.475275 | orchestrator | 2025-08-29 17:43:47.475284 | orchestrator | TASK [designate : Copying over 
rndc.conf] ************************************** 2025-08-29 17:43:47.475294 | orchestrator | Friday 29 August 2025 17:41:52 +0000 (0:00:04.066) 0:01:42.626 ********* 2025-08-29 17:43:47.475308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 17:43:47.475319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 17:43:47.475336 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 17:43:47.475351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2025-08-29 17:43:47.475464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475498 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475549 | orchestrator | 2025-08-29 17:43:47.475559 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-08-29 17:43:47.475569 | orchestrator | Friday 29 August 2025 17:41:55 +0000 (0:00:03.291) 0:01:45.919 ********* 2025-08-29 17:43:47.475579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 17:43:47.475596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 17:43:47.475607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 17:43:47.475628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 
17:43:47.475714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475748 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.475788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:43:47.475798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:43:47.475808 | orchestrator |
2025-08-29 17:43:47.475818 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-08-29 17:43:47.475828 | orchestrator | Friday 29 August 2025 17:41:59 +0000 (0:00:03.995) 0:01:49.915 *********
2025-08-29 17:43:47.475837 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:47.475847 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:47.475857 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:47.475866 | orchestrator |
2025-08-29 17:43:47.475876 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-08-29 17:43:47.475886 | orchestrator | Friday 29 August 2025 17:42:00 +0000 (0:00:00.349) 0:01:50.264 *********
2025-08-29 17:43:47.475895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api',
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 17:43:47.475910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 17:43:47.475920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.475971 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 17:43:47.475981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 17:43:47.475995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 17:43:47.476005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.476021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.476037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.476047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.476057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 17:43:47.476067 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:43:47.476081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 17:43:47.476091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.476107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.476121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 17:43:47.476131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:43:47.476141 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:47.476151 | orchestrator |
2025-08-29 17:43:47.476161 | orchestrator | TASK [designate : Check designate containers] **********************************
2025-08-29 17:43:47.476171 | orchestrator | Friday 29 August 2025 17:42:01 +0000 (0:00:01.222) 0:01:51.487 *********
2025-08-29 17:43:47.476180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 17:43:47.476194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 17:43:47.476210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 17:43:47.476224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.476235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.476245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 17:43:47.476255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.476269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.476287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.476297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.476312 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.476322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.476333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.476342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.476356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.476371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:43:47.476381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:43:47.476397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:43:47.476408 | orchestrator |
2025-08-29 17:43:47.476417 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-08-29 17:43:47.476427 | orchestrator | Friday 29 August 2025 17:42:06 +0000 (0:00:04.903) 0:01:56.390 *********
2025-08-29 17:43:47.476449 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:43:47.476460 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:43:47.476469 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:43:47.476479 | orchestrator |
2025-08-29 17:43:47.476489 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-08-29 17:43:47.476498 | orchestrator | Friday 29 August 2025 17:42:06 +0000 (0:00:00.380) 0:01:56.770 *********
2025-08-29 17:43:47.476508 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-08-29 17:43:47.476517 | orchestrator |
2025-08-29 17:43:47.476527 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-08-29 17:43:47.476537 | orchestrator | Friday 29 August 2025 17:42:08 +0000 (0:00:02.090) 0:01:58.861 *********
2025-08-29 17:43:47.476546 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 17:43:47.476556 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-08-29 17:43:47.476565 | orchestrator |
2025-08-29 17:43:47.476575 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-08-29 17:43:47.476584 | orchestrator | Friday 29 August 2025 17:42:11 +0000 (0:00:02.203) 0:02:01.064 *********
2025-08-29 17:43:47.476594 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:43:47.476604 | orchestrator |
2025-08-29 17:43:47.476613 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-08-29 17:43:47.476629 | orchestrator | Friday 29 August 2025 17:42:30 +0000 (0:00:19.126) 0:02:20.190 *********
2025-08-29 17:43:47.476638 | orchestrator |
2025-08-29 17:43:47.476648 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-08-29 17:43:47.476657 | orchestrator | Friday 29 August 2025 17:42:30 +0000 (0:00:00.335) 0:02:20.526 *********
2025-08-29 17:43:47.476667 | orchestrator |
2025-08-29 17:43:47.476677 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-08-29 17:43:47.476686 | orchestrator | Friday 29 August 2025 17:42:30 +0000 (0:00:00.075) 0:02:20.601 *********
2025-08-29 17:43:47.476696 | orchestrator |
2025-08-29 17:43:47.476705 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-08-29 17:43:47.476715 | orchestrator | Friday 29 August 2025 17:42:30 +0000 (0:00:00.069) 0:02:20.671 *********
2025-08-29 17:43:47.476724 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:43:47.476734 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:43:47.476743 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:43:47.476753 | orchestrator |
2025-08-29 17:43:47.476762 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-08-29 17:43:47.476772 | orchestrator | Friday 29 August 2025 17:42:40 +0000 (0:00:09.346) 0:02:30.018 *********
2025-08-29 17:43:47.476781 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:43:47.476791 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:43:47.476800 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:43:47.476810 | orchestrator |
2025-08-29 17:43:47.476819 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-08-29 17:43:47.476833 | orchestrator | Friday 29 August 2025 17:42:55 +0000 (0:00:15.516) 0:02:45.534 *********
2025-08-29 17:43:47.476843 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:43:47.476852 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:43:47.476862 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:43:47.476871 | orchestrator |
2025-08-29 17:43:47.476881 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-08-29 17:43:47.476890 | orchestrator | Friday 29 August 2025 17:43:09 +0000 (0:00:14.206) 0:02:59.741 *********
2025-08-29 17:43:47.476900 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:43:47.476909 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:43:47.476919 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:43:47.476928 | orchestrator |
2025-08-29 17:43:47.476938 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-08-29 17:43:47.476948 | orchestrator | Friday 29 August 2025 17:43:16 +0000 (0:00:07.105) 0:03:06.847 *********
2025-08-29 17:43:47.476957 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:43:47.476967 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:43:47.476976 | orchestrator |
changed: [testbed-node-1] 2025-08-29 17:43:47.476986 | orchestrator | 2025-08-29 17:43:47.476995 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-08-29 17:43:47.477005 | orchestrator | Friday 29 August 2025 17:43:32 +0000 (0:00:15.243) 0:03:22.090 ********* 2025-08-29 17:43:47.477014 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:43:47.477024 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:43:47.477033 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:43:47.477043 | orchestrator | 2025-08-29 17:43:47.477052 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-08-29 17:43:47.477062 | orchestrator | Friday 29 August 2025 17:43:38 +0000 (0:00:06.096) 0:03:28.186 ********* 2025-08-29 17:43:47.477072 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:43:47.477081 | orchestrator | 2025-08-29 17:43:47.477091 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:43:47.477101 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 17:43:47.477111 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:43:47.477153 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:43:47.477164 | orchestrator | 2025-08-29 17:43:47.477174 | orchestrator | 2025-08-29 17:43:47.477184 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:43:47.477194 | orchestrator | Friday 29 August 2025 17:43:45 +0000 (0:00:07.482) 0:03:35.669 ********* 2025-08-29 17:43:47.477203 | orchestrator | =============================================================================== 2025-08-29 17:43:47.477213 | orchestrator | designate : Copying over designate.conf 
-------------------------------- 32.02s 2025-08-29 17:43:47.477222 | orchestrator | designate : Running Designate bootstrap container ---------------------- 19.13s 2025-08-29 17:43:47.477232 | orchestrator | designate : Restart designate-api container ---------------------------- 15.52s 2025-08-29 17:43:47.477241 | orchestrator | designate : Restart designate-mdns container --------------------------- 15.24s 2025-08-29 17:43:47.477251 | orchestrator | designate : Restart designate-central container ------------------------ 14.21s 2025-08-29 17:43:47.477261 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 9.53s 2025-08-29 17:43:47.477270 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.35s 2025-08-29 17:43:47.477280 | orchestrator | designate : Copying over config.json files for services ----------------- 8.36s 2025-08-29 17:43:47.477289 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.48s 2025-08-29 17:43:47.477299 | orchestrator | designate : Restart designate-producer container ------------------------ 7.11s 2025-08-29 17:43:47.477308 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.56s 2025-08-29 17:43:47.477318 | orchestrator | designate : Restart designate-worker container -------------------------- 6.10s 2025-08-29 17:43:47.477327 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.50s 2025-08-29 17:43:47.477337 | orchestrator | designate : Check designate containers ---------------------------------- 4.90s 2025-08-29 17:43:47.477346 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.79s 2025-08-29 17:43:47.477356 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.68s 2025-08-29 17:43:47.477365 | orchestrator | designate : Copying over named.conf 
------------------------------------- 4.07s 2025-08-29 17:43:47.477375 | orchestrator | designate : Copying over rndc.key --------------------------------------- 4.00s 2025-08-29 17:43:47.477384 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS key --- 3.95s 2025-08-29 17:43:47.477394 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.86s 2025-08-29 17:43:47.477403 | orchestrator | 2025-08-29 17:43:47 | INFO  | Task b1fa53de-d472-486b-a697-2d19686d913b is in state STARTED 2025-08-29 17:43:47.477413 | orchestrator | 2025-08-29 17:43:47 | INFO  | Task a4dc02b4-0100-448d-9a4d-01868e3be8a2 is in state STARTED 2025-08-29 17:43:47.477423 | orchestrator | 2025-08-29 17:43:47 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:43:47.477433 | orchestrator | 2025-08-29 17:43:47 | INFO  | Task 2681cff6-00a8-4164-9ff0-2e328463a071 is in state STARTED 2025-08-29 17:43:47.477487 | orchestrator | 2025-08-29 17:43:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:43:50.502136 | orchestrator | 2025-08-29 17:43:50 | INFO  | Task b1fa53de-d472-486b-a697-2d19686d913b is in state STARTED 2025-08-29 17:43:50.504295 | orchestrator | 2025-08-29 17:43:50 | INFO  | Task a4dc02b4-0100-448d-9a4d-01868e3be8a2 is in state STARTED 2025-08-29 17:43:50.506484 | orchestrator | 2025-08-29 17:43:50 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:43:50.508146 | orchestrator | 2025-08-29 17:43:50 | INFO  | Task 2681cff6-00a8-4164-9ff0-2e328463a071 is in state STARTED 2025-08-29 17:43:50.508730 | orchestrator | 2025-08-29 17:43:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:43:53.548657 | orchestrator | 2025-08-29 17:43:53 | INFO  | Task b1fa53de-d472-486b-a697-2d19686d913b is in state STARTED 2025-08-29 17:43:53.549923 | orchestrator | 2025-08-29 17:43:53 | INFO  | Task a4dc02b4-0100-448d-9a4d-01868e3be8a2 is in state STARTED 
2025-08-29 17:43:53.550977 | orchestrator | 2025-08-29 17:43:53 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:43:53.552383 | orchestrator | 2025-08-29 17:43:53 | INFO  | Task 2681cff6-00a8-4164-9ff0-2e328463a071 is in state STARTED 2025-08-29 17:43:53.552398 | orchestrator | 2025-08-29 17:43:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:43:56.607076 | orchestrator | 2025-08-29 17:43:56 | INFO  | Task b1fa53de-d472-486b-a697-2d19686d913b is in state STARTED 2025-08-29 17:43:56.609122 | orchestrator | 2025-08-29 17:43:56 | INFO  | Task a4dc02b4-0100-448d-9a4d-01868e3be8a2 is in state STARTED 2025-08-29 17:43:56.612326 | orchestrator | 2025-08-29 17:43:56 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:43:56.613125 | orchestrator | 2025-08-29 17:43:56 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:43:56.613856 | orchestrator | 2025-08-29 17:43:56 | INFO  | Task 2681cff6-00a8-4164-9ff0-2e328463a071 is in state SUCCESS 2025-08-29 17:43:56.613899 | orchestrator | 2025-08-29 17:43:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:43:59.658542 | orchestrator | 2025-08-29 17:43:59 | INFO  | Task b1fa53de-d472-486b-a697-2d19686d913b is in state STARTED 2025-08-29 17:43:59.659554 | orchestrator | 2025-08-29 17:43:59 | INFO  | Task a4dc02b4-0100-448d-9a4d-01868e3be8a2 is in state STARTED 2025-08-29 17:43:59.661036 | orchestrator | 2025-08-29 17:43:59 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:43:59.662607 | orchestrator | 2025-08-29 17:43:59 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:43:59.662927 | orchestrator | 2025-08-29 17:43:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:44:02.706835 | orchestrator | 2025-08-29 17:44:02 | INFO  | Task b1fa53de-d472-486b-a697-2d19686d913b is in state STARTED 2025-08-29 17:44:02.708596 | 
orchestrator | 2025-08-29 17:44:02 | INFO  | Task a4dc02b4-0100-448d-9a4d-01868e3be8a2 is in state STARTED
[... identical polling cycles repeated every ~3 s from 17:44:02 to 17:45:52: tasks b1fa53de-d472-486b-a697-2d19686d913b, a4dc02b4-0100-448d-9a4d-01868e3be8a2, 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 and 93f980c0-64ef-4050-a275-07b229f67e34 all remain in state STARTED ...]
2025-08-29 17:45:52.562303 | orchestrator | 2025-08-29 17:45:52 | INFO  | Task
a4dc02b4-0100-448d-9a4d-01868e3be8a2 is in state STARTED 2025-08-29 17:45:52.564746 | orchestrator | 2025-08-29 17:45:52 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:45:52.565727 | orchestrator | 2025-08-29 17:45:52 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:45:52.566618 | orchestrator | 2025-08-29 17:45:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:45:55.597674 | orchestrator | 2025-08-29 17:45:55 | INFO  | Task b1fa53de-d472-486b-a697-2d19686d913b is in state STARTED 2025-08-29 17:45:55.598537 | orchestrator | 2025-08-29 17:45:55 | INFO  | Task a4dc02b4-0100-448d-9a4d-01868e3be8a2 is in state SUCCESS 2025-08-29 17:45:55.601376 | orchestrator | 2025-08-29 17:45:55.601411 | orchestrator | 2025-08-29 17:45:55.601423 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:45:55.601435 | orchestrator | 2025-08-29 17:45:55.601446 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 17:45:55.601457 | orchestrator | Friday 29 August 2025 17:43:51 +0000 (0:00:00.170) 0:00:00.170 ********* 2025-08-29 17:45:55.601497 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:45:55.601509 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:45:55.601520 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:45:55.601531 | orchestrator | 2025-08-29 17:45:55.601542 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:45:55.601553 | orchestrator | Friday 29 August 2025 17:43:51 +0000 (0:00:00.322) 0:00:00.492 ********* 2025-08-29 17:45:55.601563 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-08-29 17:45:55.601593 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-08-29 17:45:55.601605 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-08-29 17:45:55.601616 | 
orchestrator | 2025-08-29 17:45:55.601627 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-08-29 17:45:55.601638 | orchestrator | 2025-08-29 17:45:55.601648 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-08-29 17:45:55.601659 | orchestrator | Friday 29 August 2025 17:43:52 +0000 (0:00:00.877) 0:00:01.369 ********* 2025-08-29 17:45:55.601670 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:45:55.601680 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:45:55.601691 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:45:55.601701 | orchestrator | 2025-08-29 17:45:55.601712 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:45:55.601724 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:45:55.601736 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:45:55.601747 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:45:55.601757 | orchestrator | 2025-08-29 17:45:55.601768 | orchestrator | 2025-08-29 17:45:55.601779 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:45:55.601790 | orchestrator | Friday 29 August 2025 17:43:53 +0000 (0:00:00.900) 0:00:02.270 ********* 2025-08-29 17:45:55.601867 | orchestrator | =============================================================================== 2025-08-29 17:45:55.601883 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.90s 2025-08-29 17:45:55.601894 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s 2025-08-29 17:45:55.601904 | orchestrator | Group hosts based on Kolla action --------------------------------------- 
0.32s 2025-08-29 17:45:55.601915 | orchestrator | 2025-08-29 17:45:55.601925 | orchestrator | 2025-08-29 17:45:55.601936 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:45:55.601947 | orchestrator | 2025-08-29 17:45:55.601957 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 17:45:55.603124 | orchestrator | Friday 29 August 2025 17:43:36 +0000 (0:00:00.271) 0:00:00.271 ********* 2025-08-29 17:45:55.603151 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:45:55.603164 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:45:55.603175 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:45:55.603186 | orchestrator | 2025-08-29 17:45:55.603198 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:45:55.603210 | orchestrator | Friday 29 August 2025 17:43:36 +0000 (0:00:00.279) 0:00:00.551 ********* 2025-08-29 17:45:55.603222 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-08-29 17:45:55.603233 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-08-29 17:45:55.603244 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-08-29 17:45:55.603256 | orchestrator | 2025-08-29 17:45:55.603267 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-08-29 17:45:55.603278 | orchestrator | 2025-08-29 17:45:55.603290 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 17:45:55.603301 | orchestrator | Friday 29 August 2025 17:43:37 +0000 (0:00:00.454) 0:00:01.006 ********* 2025-08-29 17:45:55.603312 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:45:55.603324 | orchestrator | 2025-08-29 17:45:55.603335 | orchestrator | TASK [service-ks-register : magnum | 
Creating services] ************************ 2025-08-29 17:45:55.603346 | orchestrator | Friday 29 August 2025 17:43:38 +0000 (0:00:00.596) 0:00:01.602 ********* 2025-08-29 17:45:55.603358 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-08-29 17:45:55.603369 | orchestrator | 2025-08-29 17:45:55.603381 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-08-29 17:45:55.603392 | orchestrator | Friday 29 August 2025 17:43:41 +0000 (0:00:03.618) 0:00:05.221 ********* 2025-08-29 17:45:55.603403 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-08-29 17:45:55.603414 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-08-29 17:45:55.603425 | orchestrator | 2025-08-29 17:45:55.603436 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-08-29 17:45:55.603447 | orchestrator | Friday 29 August 2025 17:43:48 +0000 (0:00:06.643) 0:00:11.865 ********* 2025-08-29 17:45:55.603458 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 17:45:55.603501 | orchestrator | 2025-08-29 17:45:55.603513 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-08-29 17:45:55.603524 | orchestrator | Friday 29 August 2025 17:43:51 +0000 (0:00:03.460) 0:00:15.325 ********* 2025-08-29 17:45:55.603550 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 17:45:55.603562 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-08-29 17:45:55.603572 | orchestrator | 2025-08-29 17:45:55.603583 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-08-29 17:45:55.603594 | orchestrator | Friday 29 August 2025 17:43:55 +0000 (0:00:04.175) 0:00:19.501 ********* 2025-08-29 
17:45:55.603605 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 17:45:55.603616 | orchestrator | 2025-08-29 17:45:55.603626 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-08-29 17:45:55.603637 | orchestrator | Friday 29 August 2025 17:43:59 +0000 (0:00:03.260) 0:00:22.761 ********* 2025-08-29 17:45:55.603648 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-08-29 17:45:55.603659 | orchestrator | 2025-08-29 17:45:55.603678 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-08-29 17:45:55.603689 | orchestrator | Friday 29 August 2025 17:44:03 +0000 (0:00:04.272) 0:00:27.034 ********* 2025-08-29 17:45:55.603700 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:45:55.603711 | orchestrator | 2025-08-29 17:45:55.603723 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-08-29 17:45:55.603744 | orchestrator | Friday 29 August 2025 17:44:06 +0000 (0:00:03.338) 0:00:30.373 ********* 2025-08-29 17:45:55.603756 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:45:55.603768 | orchestrator | 2025-08-29 17:45:55.603779 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-08-29 17:45:55.603792 | orchestrator | Friday 29 August 2025 17:44:10 +0000 (0:00:03.861) 0:00:34.234 ********* 2025-08-29 17:45:55.603803 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:45:55.603815 | orchestrator | 2025-08-29 17:45:55.603827 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-08-29 17:45:55.603839 | orchestrator | Friday 29 August 2025 17:44:14 +0000 (0:00:03.589) 0:00:37.824 ********* 2025-08-29 17:45:55.603855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.604859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.604876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.605809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:45:55.605842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:45:55.605853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:45:55.605864 | orchestrator | 2025-08-29 17:45:55.605875 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-08-29 17:45:55.605886 | orchestrator | Friday 29 August 2025 17:44:15 +0000 (0:00:01.425) 0:00:39.250 ********* 2025-08-29 17:45:55.605897 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:45:55.605908 | orchestrator | 2025-08-29 17:45:55.605919 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-08-29 17:45:55.605928 | orchestrator | Friday 29 August 2025 17:44:15 +0000 (0:00:00.158) 0:00:39.408 ********* 2025-08-29 17:45:55.605938 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:45:55.605948 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:45:55.605957 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:45:55.605967 | orchestrator | 2025-08-29 17:45:55.605976 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-08-29 17:45:55.605986 | orchestrator | Friday 29 
August 2025 17:44:16 +0000 (0:00:00.547) 0:00:39.955 ********* 2025-08-29 17:45:55.605996 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 17:45:55.606005 | orchestrator | 2025-08-29 17:45:55.606042 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-08-29 17:45:55.606052 | orchestrator | Friday 29 August 2025 17:44:17 +0000 (0:00:01.007) 0:00:40.963 ********* 2025-08-29 17:45:55.606062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.606084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.606140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.606151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 
2025-08-29 17:45:55.606162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:45:55.606172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:45:55.606189 | orchestrator | 2025-08-29 17:45:55.606199 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-08-29 17:45:55.606208 | orchestrator | Friday 29 August 2025 17:44:19 +0000 (0:00:02.547) 0:00:43.511 ********* 2025-08-29 17:45:55.606218 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:45:55.606228 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:45:55.606238 | orchestrator | ok: 
[testbed-node-2] 2025-08-29 17:45:55.606247 | orchestrator | 2025-08-29 17:45:55.606257 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 17:45:55.606273 | orchestrator | Friday 29 August 2025 17:44:20 +0000 (0:00:00.376) 0:00:43.887 ********* 2025-08-29 17:45:55.606284 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:45:55.606293 | orchestrator | 2025-08-29 17:45:55.606303 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-08-29 17:45:55.606312 | orchestrator | Friday 29 August 2025 17:44:21 +0000 (0:00:00.864) 0:00:44.752 ********* 2025-08-29 17:45:55.606327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.606337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.606348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.606358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:45:55.606381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:45:55.606396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:45:55.606406 | orchestrator | 2025-08-29 17:45:55.606416 | orchestrator | TASK 
[service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-08-29 17:45:55.606426 | orchestrator | Friday 29 August 2025 17:44:23 +0000 (0:00:02.465) 0:00:47.218 ********* 2025-08-29 17:45:55.606436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 17:45:55.606446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:45:55.606456 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:45:55.606484 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 17:45:55.606510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:45:55.606531 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:45:55.606541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 17:45:55.606551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:45:55.606561 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:45:55.606571 | orchestrator | 2025-08-29 17:45:55.606581 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-08-29 17:45:55.606590 | orchestrator | Friday 29 August 2025 17:44:24 +0000 (0:00:00.811) 0:00:48.029 ********* 2025-08-29 17:45:55.606600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 17:45:55.606621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:45:55.606631 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:45:55.606652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 17:45:55.606663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:45:55.606673 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:45:55.606683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 17:45:55.606699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:45:55.606710 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:45:55.606719 | orchestrator | 2025-08-29 17:45:55.606729 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-08-29 17:45:55.606738 | orchestrator | Friday 29 August 2025 17:44:25 +0000 (0:00:01.221) 0:00:49.250 ********* 2025-08-29 17:45:55.606754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.606769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.606780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 
'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.606791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:45:55.606806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:45:55.606823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:45:55.606833 | orchestrator | 2025-08-29 17:45:55.606843 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-08-29 17:45:55.606853 | orchestrator | Friday 29 August 2025 17:44:28 +0000 (0:00:02.524) 0:00:51.775 ********* 2025-08-29 17:45:55.606867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.606877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.606894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.606904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:45:55.606921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:45:55.606935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 
17:45:55.606946 | orchestrator | 2025-08-29 17:45:55.606955 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-08-29 17:45:55.606965 | orchestrator | Friday 29 August 2025 17:44:33 +0000 (0:00:05.674) 0:00:57.449 ********* 2025-08-29 17:45:55.606975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 17:45:55.606990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 17:45:55.607001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:45:55.607017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:45:55.607027 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:45:55.607042 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:45:55.607052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 17:45:55.607062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:45:55.607078 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:45:55.607087 | orchestrator | 2025-08-29 17:45:55.607097 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-08-29 17:45:55.607107 | orchestrator | Friday 29 August 2025 17:44:34 +0000 (0:00:00.858) 0:00:58.308 ********* 2025-08-29 17:45:55.607116 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.607133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.608012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 17:45:55.608032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:45:55.608050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:45:55.608060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:45:55.608069 | orchestrator |
2025-08-29 17:45:55.608078 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-08-29 17:45:55.608087 | orchestrator | Friday 29 August 2025 17:44:37 +0000 (0:00:02.569) 0:01:00.877 *********
2025-08-29 17:45:55.608096 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:45:55.608105 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:45:55.608114 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:45:55.608123 | orchestrator |
2025-08-29 17:45:55.608131 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-08-29 17:45:55.608140 | orchestrator | Friday 29 August 2025 17:44:37 +0000 (0:00:00.334) 0:01:01.212 *********
2025-08-29 17:45:55.608150 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:45:55.608158 | orchestrator |
2025-08-29 17:45:55.608167 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-08-29 17:45:55.608176 | orchestrator | Friday 29 August 2025 17:44:39 +0000 (0:00:02.096) 0:01:03.309 *********
2025-08-29 17:45:55.608185 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:45:55.608194 | orchestrator |
2025-08-29 17:45:55.608202 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-08-29 17:45:55.608210 | orchestrator | Friday 29 August 2025 17:44:41 +0000 (0:00:02.158) 0:01:05.467 *********
2025-08-29 17:45:55.608226 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:45:55.608234 | orchestrator |
2025-08-29 17:45:55.608242 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-08-29 17:45:55.608250 | orchestrator | Friday 29 August 2025 17:45:15 +0000 (0:00:33.706) 0:01:39.174 *********
2025-08-29 17:45:55.608258 | orchestrator |
2025-08-29 17:45:55.608265 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-08-29 17:45:55.608273 | orchestrator | Friday 29 August 2025 17:45:15 +0000 (0:00:00.068) 0:01:39.243 *********
2025-08-29 17:45:55.608281 | orchestrator |
2025-08-29 17:45:55.608289 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-08-29 17:45:55.608297 | orchestrator | Friday 29 August 2025 17:45:15 +0000 (0:00:00.070) 0:01:39.312 *********
2025-08-29 17:45:55.608304 | orchestrator |
2025-08-29 17:45:55.608316 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-08-29 17:45:55.608332 | orchestrator | Friday 29 August 2025 17:45:15 +0000 (0:00:00.070) 0:01:39.382 *********
2025-08-29 17:45:55.608340 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:45:55.608348 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:45:55.608356 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:45:55.608364 | orchestrator |
2025-08-29 17:45:55.608371 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-08-29 17:45:55.608379 | orchestrator | Friday 29 August 2025 17:45:33 +0000 (0:00:17.579) 0:01:56.961 *********
2025-08-29 17:45:55.608387 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:45:55.608395 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:45:55.608403 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:45:55.608411 | orchestrator |
2025-08-29 17:45:55.608418 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:45:55.608427 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 17:45:55.608437 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 17:45:55.608445 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 17:45:55.608453 | orchestrator |
2025-08-29 17:45:55.608461 | orchestrator |
2025-08-29 17:45:55.608481 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:45:55.608490 | orchestrator | Friday 29 August 2025 17:45:52 +0000 (0:00:19.430) 0:02:16.392 *********
2025-08-29 17:45:55.608498 | orchestrator | ===============================================================================
2025-08-29 17:45:55.608506 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 33.71s
2025-08-29 17:45:55.608514 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 19.43s
2025-08-29 17:45:55.608521 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 17.58s
2025-08-29 17:45:55.608529 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.64s
2025-08-29 17:45:55.608537 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.67s
2025-08-29 17:45:55.608545 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.27s
2025-08-29 17:45:55.608553 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.18s
2025-08-29 17:45:55.608561 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.86s
2025-08-29 17:45:55.608569 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.62s
2025-08-29 17:45:55.608577 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.59s
2025-08-29 17:45:55.608584 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.46s
2025-08-29 17:45:55.608592 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.34s
2025-08-29 17:45:55.608600 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.26s
2025-08-29 17:45:55.608608 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.57s
2025-08-29 17:45:55.608616 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.55s
2025-08-29 17:45:55.608624 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.52s
2025-08-29 17:45:55.608631 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.47s
2025-08-29 17:45:55.608639 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.16s
2025-08-29 17:45:55.608647 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.10s
2025-08-29 17:45:55.608655 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.43s
2025-08-29 17:45:55.608671
| orchestrator | 2025-08-29 17:45:55 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:45:55.608679 | orchestrator | 2025-08-29 17:45:55 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:45:55.608687 | orchestrator | 2025-08-29 17:45:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:45:58.634130 | orchestrator | 2025-08-29 17:45:58 | INFO  | Task b1fa53de-d472-486b-a697-2d19686d913b is in state STARTED 2025-08-29 17:45:58.636680 | orchestrator | 2025-08-29 17:45:58 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:45:58.637840 | orchestrator | 2025-08-29 17:45:58 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:45:58.637858 | orchestrator | 2025-08-29 17:45:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:46:01.684887 | orchestrator | 2025-08-29 17:46:01 | INFO  | Task b1fa53de-d472-486b-a697-2d19686d913b is in state STARTED 2025-08-29 17:46:01.686105 | orchestrator | 2025-08-29 17:46:01 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:46:01.687653 | orchestrator | 2025-08-29 17:46:01 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:46:01.687730 | orchestrator | 2025-08-29 17:46:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:46:04.732731 | orchestrator | 2025-08-29 17:46:04 | INFO  | Task b1fa53de-d472-486b-a697-2d19686d913b is in state STARTED 2025-08-29 17:46:04.732876 | orchestrator | 2025-08-29 17:46:04 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:46:04.733616 | orchestrator | 2025-08-29 17:46:04 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:46:04.733645 | orchestrator | 2025-08-29 17:46:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:46:07.774595 | orchestrator | 2025-08-29 17:46:07 | INFO  | Task 
b1fa53de-d472-486b-a697-2d19686d913b is in state STARTED 2025-08-29 17:46:07.777820 | orchestrator | 2025-08-29 17:46:07 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:46:07.779512 | orchestrator | 2025-08-29 17:46:07 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:46:07.779541 | orchestrator | 2025-08-29 17:46:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:46:10.818903 | orchestrator | 2025-08-29 17:46:10 | INFO  | Task b1fa53de-d472-486b-a697-2d19686d913b is in state STARTED 2025-08-29 17:46:10.819888 | orchestrator | 2025-08-29 17:46:10 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:46:10.821197 | orchestrator | 2025-08-29 17:46:10 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:46:10.821237 | orchestrator | 2025-08-29 17:46:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:46:13.870593 | orchestrator | 2025-08-29 17:46:13 | INFO  | Task b1fa53de-d472-486b-a697-2d19686d913b is in state STARTED 2025-08-29 17:46:13.871782 | orchestrator | 2025-08-29 17:46:13 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:46:13.873016 | orchestrator | 2025-08-29 17:46:13 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:46:13.873045 | orchestrator | 2025-08-29 17:46:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:46:16.917375 | orchestrator | 2025-08-29 17:46:16 | INFO  | Task b1fa53de-d472-486b-a697-2d19686d913b is in state STARTED 2025-08-29 17:46:16.918162 | orchestrator | 2025-08-29 17:46:16 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:46:16.919102 | orchestrator | 2025-08-29 17:46:16 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:46:16.919127 | orchestrator | 2025-08-29 17:46:16 | INFO  | Wait 1 second(s) until the next 
check 2025-08-29 17:46:19.966599 | orchestrator | 2025-08-29 17:46:19 | INFO  | Task b1fa53de-d472-486b-a697-2d19686d913b is in state STARTED 2025-08-29 17:46:19.967082 | orchestrator | 2025-08-29 17:46:19 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED 2025-08-29 17:46:19.969663 | orchestrator | 2025-08-29 17:46:19 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:46:19.969965 | orchestrator | 2025-08-29 17:46:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:46:23.029101 | orchestrator | 2025-08-29 17:46:23 | INFO  | Task b1fa53de-d472-486b-a697-2d19686d913b is in state SUCCESS 2025-08-29 17:46:23.034422 | orchestrator | 2025-08-29 17:46:23.034513 | orchestrator | 2025-08-29 17:46:23.034531 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:46:23.034543 | orchestrator | 2025-08-29 17:46:23.034554 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 17:46:23.034565 | orchestrator | Friday 29 August 2025 17:43:43 +0000 (0:00:00.240) 0:00:00.240 ********* 2025-08-29 17:46:23.034577 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:46:23.034589 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:46:23.034603 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:46:23.034616 | orchestrator | 2025-08-29 17:46:23.034627 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:46:23.034639 | orchestrator | Friday 29 August 2025 17:43:43 +0000 (0:00:00.279) 0:00:00.520 ********* 2025-08-29 17:46:23.034650 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-08-29 17:46:23.034662 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-08-29 17:46:23.034673 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-08-29 17:46:23.034684 | orchestrator | 2025-08-29 17:46:23.034695 | 
orchestrator | PLAY [Apply role grafana] ******************************************************
2025-08-29 17:46:23.034706 | orchestrator |
2025-08-29 17:46:23.034717 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-08-29 17:46:23.034727 | orchestrator | Friday 29 August 2025 17:43:44 +0000 (0:00:00.416) 0:00:00.936 *********
2025-08-29 17:46:23.034738 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:46:23.034750 | orchestrator |
2025-08-29 17:46:23.034761 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-08-29 17:46:23.034815 | orchestrator | Friday 29 August 2025 17:43:44 +0000 (0:00:00.573) 0:00:01.510 *********
2025-08-29 17:46:23.034833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.034850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.034888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.034901 | orchestrator |
2025-08-29 17:46:23.034912 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-08-29 17:46:23.034923 | orchestrator | Friday 29 August 2025 17:43:46 +0000 (0:00:01.055) 0:00:02.565 *********
2025-08-29 17:46:23.034933 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-08-29 17:46:23.034945 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-08-29 17:46:23.034956 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 17:46:23.034967 | orchestrator |
2025-08-29 17:46:23.034978 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-08-29 17:46:23.034988 | orchestrator | Friday 29 August 2025 17:43:47 +0000 (0:00:01.166) 0:00:03.732 *********
2025-08-29 17:46:23.035000 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:46:23.035012 | orchestrator |
2025-08-29 17:46:23.035023 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-08-29 17:46:23.035035 | orchestrator | Friday 29 August 2025 17:43:48 +0000 (0:00:01.029) 0:00:04.762 *********
2025-08-29 17:46:23.035062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.035080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.035093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.035113 | orchestrator |
2025-08-29 17:46:23.035645 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-08-29 17:46:23.035672 | orchestrator | Friday 29 August 2025 17:43:49 +0000 (0:00:01.701) 0:00:06.464 *********
2025-08-29 17:46:23.035684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.035696 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:23.035707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.035718 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:23.035739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.035750 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:23.035762 | orchestrator |
2025-08-29 17:46:23.035773 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-08-29 17:46:23.035784 | orchestrator | Friday 29 August 2025 17:43:50 +0000 (0:00:00.581) 0:00:07.046 *********
2025-08-29 17:46:23.035795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.035814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.036168 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:23.036203 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:23.036215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.036227 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:23.036238 | orchestrator |
2025-08-29 17:46:23.036249 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-08-29 17:46:23.036260 | orchestrator | Friday 29 August 2025 17:43:51 +0000 (0:00:01.016) 0:00:08.062 *********
2025-08-29 17:46:23.036270 | orchestrator | changed: [testbed-node-0] => (item={'key':
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.036282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.036328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.036340 | orchestrator |
2025-08-29 17:46:23.036350 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-08-29 17:46:23.036361 | orchestrator | Friday 29 August 2025 17:43:52 +0000 (0:00:01.381) 0:00:09.443 *********
2025-08-29 17:46:23.036379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.036401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.036412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.036423 | orchestrator |
2025-08-29 17:46:23.036433 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-08-29 17:46:23.036444 | orchestrator | Friday 29 August 2025 17:43:54 +0000 (0:00:01.612) 0:00:11.056 *********
2025-08-29 17:46:23.036454 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:23.036464 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:23.036503 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:23.036513 | orchestrator |
2025-08-29 17:46:23.036523 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-08-29 17:46:23.036533 | orchestrator | Friday 29 August 2025 17:43:55 +0000 (0:00:00.603) 0:00:11.660 *********
2025-08-29 17:46:23.036544 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 17:46:23.036555 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 17:46:23.036565 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 17:46:23.036575 | orchestrator |
2025-08-29 17:46:23.036586 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-08-29 17:46:23.036596 | orchestrator | Friday 29 August 2025 17:43:56 +0000 (0:00:01.349) 0:00:13.009 *********
2025-08-29 17:46:23.036606 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 17:46:23.036617 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 17:46:23.036628 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 17:46:23.036638 | orchestrator |
2025-08-29 17:46:23.036649 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-08-29 17:46:23.036660 | orchestrator | Friday 29 August 2025 17:43:57 +0000 (0:00:01.302) 0:00:14.311 *********
2025-08-29 17:46:23.036702 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 17:46:23.036715 | orchestrator |
2025-08-29 17:46:23.036726 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-08-29 17:46:23.036737 | orchestrator | Friday 29 August 2025 17:43:58 +0000 (0:00:00.867) 0:00:15.179 *********
2025-08-29 17:46:23.036757 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-08-29 17:46:23.036768 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-08-29 17:46:23.036779 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:46:23.036790 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:46:23.036801 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:46:23.036812 | orchestrator |
2025-08-29 17:46:23.036823 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-08-29 17:46:23.036835 | orchestrator | Friday 29 August 2025 17:43:59 +0000 (0:00:00.881) 0:00:16.060 *********
2025-08-29 17:46:23.036846 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:23.036857 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:23.036868 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:23.036879 | orchestrator |
2025-08-29 17:46:23.036890 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-08-29 17:46:23.036901 | orchestrator | Friday 29 August 2025 17:44:00 +0000 (0:00:00.694) 0:00:16.755 *********
2025-08-29 17:46:23.036920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1086861, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8803399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.036934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1086861, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8803399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.036945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1086861, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8803399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.036957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1086915, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9009593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.036995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1086915, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9009593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.037037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1086915, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9009593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.037053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1086875, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.037065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1086875, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.037076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1086875, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.037087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1086919, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9059594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.037099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1086919, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9059594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.037159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo':
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1086919, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9059594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1086884, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8859591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1086884, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8859591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1086884, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8859591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1086907, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8979592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1086907, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8979592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1086907, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8979592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1086859, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8782759, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1086859, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8782759, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': 
'/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1086859, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8782759, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1086867, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.881269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1086867, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.881269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': 
{'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1086867, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.881269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1086877, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1086877, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037431 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1086877, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1086901, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.895442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1086901, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.895442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037526 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1086901, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.895442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1086914, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9004264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1086914, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9004264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037603 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1086914, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9004264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1086872, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.881959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1086872, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.881959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037636 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1086872, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.881959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1086905, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8969593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.037688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1086905, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.8969593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-08-29 17:46:23.037716 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-detail.json -> /operations/grafana/dashboards/ceph/radosgw-detail.json, mode 0644, root:root, 19695 bytes)
2025-08-29 17:46:23.037727 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (item=ceph/osds-overview.json -> /operations/grafana/dashboards/ceph/osds-overview.json, mode 0644, root:root, 38432 bytes)
2025-08-29 17:46:23.037776 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (item=ceph/multi-cluster-overview.json -> /operations/grafana/dashboards/ceph/multi-cluster-overview.json, mode 0644, root:root, 62676 bytes)
2025-08-29 17:46:23.037821 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (item=ceph/hosts-overview.json -> /operations/grafana/dashboards/ceph/hosts-overview.json, mode 0644, root:root, 27218 bytes)
2025-08-29 17:46:23.037859 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (item=ceph/pool-overview.json -> /operations/grafana/dashboards/ceph/pool-overview.json, mode 0644, root:root, 49139 bytes)
2025-08-29 17:46:23.037901 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (item=ceph/host-details.json -> /operations/grafana/dashboards/ceph/host-details.json, mode 0644, root:root, 44791 bytes)
2025-08-29 17:46:23.037939 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (item=ceph/radosgw-sync-overview.json -> /operations/grafana/dashboards/ceph/radosgw-sync-overview.json, mode 0644, root:root, 16156 bytes)
2025-08-29 17:46:23.037985 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (item=openstack/openstack.json -> /operations/grafana/dashboards/openstack/openstack.json, mode 0644, root:root, 57270 bytes)
2025-08-29 17:46:23.038057 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/haproxy.json -> /operations/grafana/dashboards/infrastructure/haproxy.json, mode 0644, root:root, 410814 bytes)
2025-08-29 17:46:23.038069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 
'inode': 1086945, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9239595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1086945, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9239595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1086935, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9099593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1086935, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9099593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1086935, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9099593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1086958, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9279594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1086958, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9279594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1086958, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9279594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1086931, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9073358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1086931, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9073358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1086931, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9073358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1087008, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9541423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038248 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1087008, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9541423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1087008, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9541423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1086960, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9497998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1086960, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9497998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1086960, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9497998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1087013, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9548903, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1087013, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9548903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1087013, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9548903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1087048, 'dev': 
122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.96596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1087048, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.96596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1087048, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.96596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 21109, 'inode': 1087003, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9530573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1087003, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9530573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1087003, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9530573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1086956, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.92623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1086956, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.92623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1086956, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.92623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1086942, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9169595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1086942, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9169595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1086942, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9169595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1086953, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.92623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1086953, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.92623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1086953, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.92623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1086937, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9149594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1086937, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9149594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1086937, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9149594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038643 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1086957, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9269595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1086957, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9269595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 17:46:23.038670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1086957, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9269595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.038685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1087036, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9629598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.038694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1087036, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9629598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.038703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1087036, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9629598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.038716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1087026, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9575138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.038727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1087026, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9575138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.038750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1087026, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9575138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.038761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1086932, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.908027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.038771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1086932, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.908027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.038781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1086932, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.908027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.038936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1086933, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9089594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.038953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1086933, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9089594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.038997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1086933, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9089594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.039011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1086990, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9515393, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.039021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1086990, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9515393, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.039032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1086990, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9515393, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.039042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1087017, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9563413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.039057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1087017, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9563413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.039150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1087017, 'dev': 122, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756486234.9563413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 17:46:23.039160 | orchestrator |
2025-08-29 17:46:23.039170 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-08-29 17:46:23.039179 | orchestrator | Friday 29 August 2025 17:44:37 +0000 (0:00:37.386) 0:00:54.142 *********
2025-08-29 17:46:23.039189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.039199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.039208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 17:46:23.039218 | orchestrator |
2025-08-29 17:46:23.039227 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-08-29 17:46:23.039236 | orchestrator | Friday 29 August 2025 17:44:38 +0000 (0:00:01.032) 0:00:55.174 *********
2025-08-29 17:46:23.039245 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:23.039255 | orchestrator |
2025-08-29 17:46:23.039264 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-08-29 17:46:23.039273 | orchestrator | Friday 29 August 2025 17:44:40 +0000 (0:00:02.303) 0:00:57.478 *********
2025-08-29 17:46:23.039282 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:23.039291 | orchestrator |
2025-08-29 17:46:23.039300 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-08-29 17:46:23.039309 | orchestrator | Friday 29 August 2025 17:44:43 +0000 (0:00:02.120) 0:00:59.598 *********
2025-08-29 17:46:23.039318 | orchestrator |
2025-08-29 17:46:23.039327 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-08-29 17:46:23.039365 | orchestrator | Friday 29 August 2025 17:44:43 +0000 (0:00:00.069) 0:00:59.667 *********
2025-08-29 17:46:23.039374 | orchestrator |
2025-08-29 17:46:23.039383 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-08-29 17:46:23.039392 | orchestrator | Friday 29 August 2025 17:44:43 +0000 (0:00:00.076) 0:00:59.743 *********
2025-08-29 17:46:23.039401 | orchestrator |
2025-08-29 17:46:23.039411 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-08-29 17:46:23.039420 | orchestrator | Friday 29 August 2025 17:44:43 +0000 (0:00:00.313) 0:01:00.057 *********
2025-08-29 17:46:23.039429 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:23.039438 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:23.039447 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:23.039456 | orchestrator |
2025-08-29 17:46:23.039465 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-08-29 17:46:23.039539 | orchestrator | Friday 29 August 2025 17:44:45 +0000 (0:00:01.974) 0:01:02.031 *********
2025-08-29 17:46:23.039549 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:23.039559 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:23.039569 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-08-29 17:46:23.039579 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-08-29 17:46:23.039589 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
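The "Waiting for grafana to start on first node" handler above is an Ansible `until`/`retries` poll: it probes the service repeatedly (12 retries budgeted, with four failures logged before the eventual `ok`). A minimal sketch of this retry-until-healthy pattern; `wait_for` and the `check` callable are our own illustrative names, not part of the playbook:

```python
import time

def wait_for(check, retries=12, delay=1.0):
    """Retry a boolean health check, Ansible-`until` style.

    Returns the attempt number on which the check first succeeded,
    or raises TimeoutError once the retry budget is exhausted.
    """
    for attempt in range(1, retries + 1):
        if check():
            return attempt
        if attempt < retries:
            time.sleep(delay)  # back off between probes
    raise TimeoutError(f"service not ready after {retries} attempts")

# A probe that fails four times and then succeeds, mirroring the
# "12 -> 9 retries left" sequence in the log above.
calls = iter([False, False, False, False, True])
print(wait_for(lambda: next(calls), retries=12, delay=0))  # → 5
```

In the real playbook the probe is an HTTP request against the Grafana port; the loop structure is the same.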
2025-08-29 17:46:23.039599 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2025-08-29 17:46:23.039609 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:46:23.039619 | orchestrator |
2025-08-29 17:46:23.039634 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-08-29 17:46:23.039646 | orchestrator | Friday 29 August 2025 17:45:35 +0000 (0:00:50.255) 0:01:52.287 *********
2025-08-29 17:46:23.039656 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:23.039666 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:46:23.039676 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:46:23.039686 | orchestrator |
2025-08-29 17:46:23.039697 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-08-29 17:46:23.039708 | orchestrator | Friday 29 August 2025 17:46:14 +0000 (0:00:38.806) 0:02:31.094 *********
2025-08-29 17:46:23.039718 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:46:23.039728 | orchestrator |
2025-08-29 17:46:23.039738 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-08-29 17:46:23.039749 | orchestrator | Friday 29 August 2025 17:46:16 +0000 (0:00:02.086) 0:02:33.180 *********
2025-08-29 17:46:23.039759 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:23.039769 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:23.039780 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:23.039789 | orchestrator |
2025-08-29 17:46:23.039799 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-08-29 17:46:23.039809 | orchestrator | Friday 29 August 2025 17:46:17 +0000 (0:00:00.548) 0:02:33.729 *********
2025-08-29 17:46:23.039819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-08-29 17:46:23.039831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-08-29 17:46:23.039842 | orchestrator |
2025-08-29 17:46:23.039857 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-08-29 17:46:23.039867 | orchestrator | Friday 29 August 2025 17:46:19 +0000 (0:00:02.401) 0:02:36.130 *********
2025-08-29 17:46:23.039876 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:23.039885 | orchestrator |
2025-08-29 17:46:23.039894 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:46:23.039904 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 17:46:23.039914 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 17:46:23.039923 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 17:46:23.039932 | orchestrator |
2025-08-29 17:46:23.039941 | orchestrator |
2025-08-29 17:46:23.039950 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:46:23.039959 | orchestrator | Friday 29 August 2025 17:46:20 +0000 (0:00:00.399) 0:02:36.530 *********
2025-08-29 17:46:23.039968 | orchestrator | ===============================================================================
2025-08-29 17:46:23.039977 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.26s
2025-08-29 17:46:23.039987 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 38.81s
2025-08-29 17:46:23.039996 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.39s
2025-08-29 17:46:23.040007 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.40s
2025-08-29 17:46:23.040022 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.30s
2025-08-29 17:46:23.040032 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.12s
2025-08-29 17:46:23.040042 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.09s
2025-08-29 17:46:23.040051 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.97s
2025-08-29 17:46:23.040061 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.70s
2025-08-29 17:46:23.040070 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.61s
2025-08-29 17:46:23.040080 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.38s
2025-08-29 17:46:23.040089 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.35s
2025-08-29 17:46:23.040099 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.30s
2025-08-29 17:46:23.040108 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.17s
2025-08-29 17:46:23.040117 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.06s
2025-08-29 17:46:23.040126 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.03s
2025-08-29 17:46:23.040135 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.03s
2025-08-29 17:46:23.040144 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.02s
2025-08-29 17:46:23.040155 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.88s
2025-08-29 17:46:23.040168 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.87s
2025-08-29 17:46:23.040179 | orchestrator | 2025-08-29 17:46:23 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED
2025-08-29 17:46:23.040188 | orchestrator | 2025-08-29 17:46:23 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED
2025-08-29 17:46:23.040198 | orchestrator | 2025-08-29 17:46:23 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:46:26.085755 | orchestrator | 2025-08-29 17:46:26 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED
2025-08-29 17:46:26.088287 | orchestrator | 2025-08-29 17:46:26 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED
2025-08-29 17:46:26.088324 | orchestrator | 2025-08-29 17:46:26 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:46:29.132010 | orchestrator | 2025-08-29 17:46:29 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED
2025-08-29 17:46:29.132762 | orchestrator | 2025-08-29 17:46:29 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED
2025-08-29 17:46:29.132792 | orchestrator | 2025-08-29 17:46:29 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:46:32.180631 | orchestrator | 2025-08-29 17:46:32 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state STARTED
2025-08-29 17:46:32.185553 | orchestrator | 2025-08-29 17:46:32 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED
2025-08-29 17:46:32.185598 | orchestrator | 2025-08-29 17:46:32 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:46:35.227560 | orchestrator | 2025-08-29 17:46:35 | INFO  | Task 9b13d01f-e4b7-4477-8bd6-331ea64a76f2 is in state SUCCESS
2025-08-29 17:46:35.228952 | orchestrator |
2025-08-29 17:46:35.229027 | orchestrator |
2025-08-29 17:46:35.229043 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 17:46:35.229056 | orchestrator |
2025-08-29 17:46:35.229067 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-08-29 17:46:35.229079 | orchestrator | Friday 29 August 2025 17:35:57 +0000 (0:00:00.343) 0:00:00.343 *********
2025-08-29 17:46:35.229090 | orchestrator | changed: [testbed-manager]
2025-08-29 17:46:35.229102 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:35.229113 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:46:35.229124 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:46:35.229135 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:46:35.229145 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:46:35.229156 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:46:35.229167 | orchestrator |
2025-08-29 17:46:35.229178 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 17:46:35.229189 | orchestrator | Friday 29 August 2025 17:35:59 +0000 (0:00:01.747) 0:00:02.091 *********
2025-08-29 17:46:35.229199 | orchestrator | changed: [testbed-manager]
2025-08-29 17:46:35.229210 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:35.229221 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:46:35.229232 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:46:35.229243 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:46:35.229253 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:46:35.230280 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:46:35.230403 | orchestrator |
2025-08-29 17:46:35.230433 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 17:46:35.230457 | orchestrator | Friday 29 August 2025 17:36:00 +0000 (0:00:01.129) 0:00:03.221 *********
2025-08-29 17:46:35.230516 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-08-29 17:46:35.230529 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-08-29 17:46:35.230540 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-08-29 17:46:35.230551 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-08-29 17:46:35.230561 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-08-29 17:46:35.230572 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-08-29 17:46:35.230583 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-08-29 17:46:35.230594 | orchestrator |
2025-08-29 17:46:35.230605 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-08-29 17:46:35.230616 | orchestrator |
2025-08-29 17:46:35.230627 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-08-29 17:46:35.230675 | orchestrator | Friday 29 August 2025 17:36:02 +0000 (0:00:01.918) 0:00:05.139 *********
2025-08-29 17:46:35.230687 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:46:35.230698 | orchestrator |
2025-08-29 17:46:35.230709 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-08-29 17:46:35.230720 | orchestrator | Friday 29 August 2025 17:36:04 +0000 (0:00:01.703) 0:00:06.843 *********
2025-08-29 17:46:35.230732 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-08-29 17:46:35.230742 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-08-29 17:46:35.230753 | orchestrator |
2025-08-29 17:46:35.230764 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-08-29 17:46:35.230775 | orchestrator | Friday 29 August 2025 17:36:08 +0000 (0:00:03.705) 0:00:10.549 *********
2025-08-29 17:46:35.230785 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 17:46:35.230796 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 17:46:35.230807 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:35.230818 | orchestrator |
2025-08-29 17:46:35.230829 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-08-29 17:46:35.230856 | orchestrator | Friday 29 August 2025 17:36:11 +0000 (0:00:03.456) 0:00:14.006 *********
2025-08-29 17:46:35.230867 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:35.230878 | orchestrator |
2025-08-29 17:46:35.230889 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-08-29 17:46:35.230900 | orchestrator | Friday 29 August 2025 17:36:12 +0000 (0:00:00.784) 0:00:14.790 *********
2025-08-29 17:46:35.230910 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:35.230921 | orchestrator |
2025-08-29 17:46:35.230932 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-08-29 17:46:35.230943 | orchestrator | Friday 29 August 2025 17:36:14 +0000 (0:00:02.303) 0:00:17.093 *********
2025-08-29 17:46:35.230953 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:35.230964 | orchestrator |
2025-08-29 17:46:35.230974 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-08-29 17:46:35.230985 | orchestrator | Friday 29 August 2025 17:36:19 +0000 (0:00:04.396) 0:00:21.489 *********
2025-08-29 17:46:35.230996 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.231007 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.231018 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.231029 | orchestrator |
2025-08-29 17:46:35.231039 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-08-29 17:46:35.231050 | orchestrator | Friday 29 August 2025 17:36:19 +0000 (0:00:00.387) 0:00:21.877 *********
2025-08-29 17:46:35.231061 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:46:35.231072 | orchestrator |
2025-08-29 17:46:35.231082 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-08-29 17:46:35.231093 | orchestrator | Friday 29 August 2025 17:36:48 +0000 (0:00:28.977) 0:00:50.854 *********
2025-08-29 17:46:35.231103 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:35.231114 | orchestrator |
2025-08-29 17:46:35.231125 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-08-29 17:46:35.231136 | orchestrator | Friday 29 August 2025 17:37:05 +0000 (0:00:16.536) 0:01:07.391 *********
2025-08-29 17:46:35.231146 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:46:35.231157 | orchestrator |
2025-08-29 17:46:35.231167 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-08-29 17:46:35.231178 | orchestrator | Friday 29 August 2025 17:37:18 +0000 (0:00:13.736) 0:01:21.127 *********
2025-08-29 17:46:35.231211 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:46:35.231222 | orchestrator |
2025-08-29 17:46:35.231234 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-08-29 17:46:35.231245 | orchestrator | Friday 29 August 2025 17:37:19 +0000 (0:00:01.073) 0:01:22.200 *********
2025-08-29 17:46:35.231255 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.231275 | orchestrator |
2025-08-29 17:46:35.231286 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-08-29 17:46:35.231296 | orchestrator | Friday 29 August 2025 17:37:20 +0000 (0:00:00.504) 0:01:22.704 *********
2025-08-29 17:46:35.231308 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:46:35.231319 | orchestrator |
2025-08-29 17:46:35.231330 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-08-29 17:46:35.231340 | orchestrator | Friday 29 August 2025 17:37:21 +0000 (0:00:01.233) 0:01:23.938 *********
2025-08-29 17:46:35.231351 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:46:35.231361 | orchestrator |
2025-08-29 17:46:35.231372 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-08-29 17:46:35.231383 | orchestrator | Friday 29 August 2025 17:37:37 +0000 (0:00:16.034) 0:01:39.973 *********
2025-08-29 17:46:35.231394 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.231404 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.231415 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.231426 | orchestrator |
2025-08-29 17:46:35.231443 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-08-29 17:46:35.231461 | orchestrator |
2025-08-29 17:46:35.231500 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-08-29 17:46:35.231518 | orchestrator | Friday 29 August 2025 17:37:37 +0000 (0:00:00.390) 0:01:40.363 *********
2025-08-29 17:46:35.231536 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:46:35.231555 | orchestrator |
2025-08-29 17:46:35.231574 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-08-29 17:46:35.231592 | orchestrator | Friday 29 August 2025 17:37:38 +0000 (0:00:00.661) 0:01:41.025 *********
2025-08-29 17:46:35.231609 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.231629 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.231648 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:35.231668 | orchestrator |
2025-08-29 17:46:35.231686 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-08-29 17:46:35.231705 | orchestrator | Friday 29 August 2025 17:37:40 +0000 (0:00:01.975) 0:01:43.000 *********
2025-08-29 17:46:35.231722 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.231740 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.231758 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:35.231776 | orchestrator |
2025-08-29 17:46:35.231794 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-08-29 17:46:35.231812 | orchestrator | Friday 29 August 2025 17:37:42 +0000 (0:00:02.206) 0:01:45.206 *********
2025-08-29 17:46:35.231828 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.231845 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.231862 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.231880 | orchestrator |
2025-08-29 17:46:35.231898 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-08-29 17:46:35.231918 | orchestrator | Friday 29 August 2025 17:37:43 +0000 (0:00:00.359) 0:01:45.566 *********
2025-08-29 17:46:35.231936 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-08-29 17:46:35.231954 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.231972 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-08-29 17:46:35.231990 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.232008 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-08-29 17:46:35.232038 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-08-29 17:46:35.232057 | orchestrator |
2025-08-29 17:46:35.232076 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-08-29 17:46:35.232094 | orchestrator | Friday 29 August 2025 17:37:51 +0000 (0:00:08.448) 0:01:54.014 *********
2025-08-29 17:46:35.232111 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.232127 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.232157 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.232176 | orchestrator |
2025-08-29 17:46:35.232192 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-08-29 17:46:35.232209 | orchestrator | Friday 29 August 2025 17:37:52 +0000 (0:00:00.383) 0:01:54.398 *********
2025-08-29 17:46:35.232228 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-08-29 17:46:35.232246 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.232264 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-08-29 17:46:35.232281 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.232298 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-08-29 17:46:35.232316 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.232334 | orchestrator |
2025-08-29 17:46:35.232350 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-08-29 17:46:35.232367 | orchestrator | Friday 29 August 2025 17:37:52 +0000 (0:00:00.708) 0:01:55.107 *********
2025-08-29 17:46:35.232385 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.232402 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.232421 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:35.232438 | orchestrator |
2025-08-29 17:46:35.232457 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-08-29 17:46:35.232519 | orchestrator | Friday 29 August 2025 17:37:53 +0000 (0:00:00.474) 0:01:55.581 *********
2025-08-29 17:46:35.232540 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.232558 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.232575 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:35.232594 | orchestrator |
2025-08-29 17:46:35.232612 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-08-29 17:46:35.232631 | orchestrator | Friday 29 August 2025 17:37:54 +0000 (0:00:00.943) 0:01:56.525 *********
2025-08-29 17:46:35.232649 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.232668 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.232954 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:35.232991 | orchestrator |
2025-08-29 17:46:35.233009 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-08-29 17:46:35.233026 | orchestrator | Friday 29 August 2025 17:37:56 +0000 (0:00:02.185) 0:01:58.710 *********
2025-08-29 17:46:35.233042 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.233059 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.233077 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:46:35.233094 | orchestrator |
2025-08-29 17:46:35.233112 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-08-29 17:46:35.233130 | orchestrator | Friday 29 August 2025 17:38:17 +0000 (0:00:20.856) 0:02:19.567 *********
2025-08-29 17:46:35.233147 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.233165 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.233182 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:46:35.233200 | orchestrator |
2025-08-29 17:46:35.233218 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-08-29 17:46:35.233236 | orchestrator | Friday 29 August 2025 17:38:29 +0000 (0:00:12.486) 0:02:32.054 *********
2025-08-29 17:46:35.233253 | orchestrator | skipping:
[testbed-node-1] 2025-08-29 17:46:35.233271 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:46:35.233289 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.233307 | orchestrator | 2025-08-29 17:46:35.233325 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-08-29 17:46:35.233342 | orchestrator | Friday 29 August 2025 17:38:32 +0000 (0:00:02.867) 0:02:34.921 ********* 2025-08-29 17:46:35.233360 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.233379 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.233397 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:46:35.233415 | orchestrator | 2025-08-29 17:46:35.233435 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-08-29 17:46:35.233454 | orchestrator | Friday 29 August 2025 17:38:45 +0000 (0:00:12.812) 0:02:47.734 ********* 2025-08-29 17:46:35.233555 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.233579 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.233599 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.233617 | orchestrator | 2025-08-29 17:46:35.233637 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-08-29 17:46:35.233656 | orchestrator | Friday 29 August 2025 17:38:46 +0000 (0:00:01.166) 0:02:48.901 ********* 2025-08-29 17:46:35.233675 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.233693 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.233712 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.233730 | orchestrator | 2025-08-29 17:46:35.233748 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-08-29 17:46:35.233767 | orchestrator | 2025-08-29 17:46:35.233784 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 
17:46:35.233804 | orchestrator | Friday 29 August 2025 17:38:47 +0000 (0:00:00.555) 0:02:49.456 ********* 2025-08-29 17:46:35.233823 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:46:35.233844 | orchestrator | 2025-08-29 17:46:35.233865 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-08-29 17:46:35.233883 | orchestrator | Friday 29 August 2025 17:38:47 +0000 (0:00:00.689) 0:02:50.146 ********* 2025-08-29 17:46:35.233902 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-08-29 17:46:35.233919 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-08-29 17:46:35.233936 | orchestrator | 2025-08-29 17:46:35.233954 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-08-29 17:46:35.233972 | orchestrator | Friday 29 August 2025 17:38:50 +0000 (0:00:03.075) 0:02:53.221 ********* 2025-08-29 17:46:35.234003 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-08-29 17:46:35.234070 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-08-29 17:46:35.234090 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-08-29 17:46:35.234109 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-08-29 17:46:35.234126 | orchestrator | 2025-08-29 17:46:35.234147 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-08-29 17:46:35.234166 | orchestrator | Friday 29 August 2025 17:38:56 +0000 (0:00:05.928) 0:02:59.150 ********* 2025-08-29 17:46:35.234185 | orchestrator | ok: [testbed-node-0] => 
(item=service) 2025-08-29 17:46:35.234202 | orchestrator | 2025-08-29 17:46:35.234218 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-08-29 17:46:35.234236 | orchestrator | Friday 29 August 2025 17:38:59 +0000 (0:00:02.798) 0:03:01.948 ********* 2025-08-29 17:46:35.234253 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 17:46:35.234271 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-08-29 17:46:35.234288 | orchestrator | 2025-08-29 17:46:35.234306 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-08-29 17:46:35.234323 | orchestrator | Friday 29 August 2025 17:39:03 +0000 (0:00:03.757) 0:03:05.705 ********* 2025-08-29 17:46:35.234342 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 17:46:35.234359 | orchestrator | 2025-08-29 17:46:35.234381 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-08-29 17:46:35.234398 | orchestrator | Friday 29 August 2025 17:39:06 +0000 (0:00:03.133) 0:03:08.839 ********* 2025-08-29 17:46:35.234414 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-08-29 17:46:35.234431 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-08-29 17:46:35.234461 | orchestrator | 2025-08-29 17:46:35.234505 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-08-29 17:46:35.234747 | orchestrator | Friday 29 August 2025 17:39:13 +0000 (0:00:07.288) 0:03:16.127 ********* 2025-08-29 17:46:35.234786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 17:46:35.235961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 17:46:35.236019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 17:46:35.236134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:46:35.236180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:46:35.236201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:46:35.236219 | orchestrator |
2025-08-29 17:46:35.236238 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-08-29 17:46:35.236257 | orchestrator | Friday 29 August 2025 17:39:15 +0000 (0:00:01.730) 0:03:17.858 *********
2025-08-29 17:46:35.236275 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.236294 | orchestrator |
2025-08-29 17:46:35.236313 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-08-29 17:46:35.236331 | orchestrator | Friday 29 August 2025 17:39:15 +0000 (0:00:00.368) 0:03:18.227 *********
2025-08-29 17:46:35.236350 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.236368 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.236387 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.236404 | orchestrator |
2025-08-29 17:46:35.236421 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-08-29 17:46:35.236439 | orchestrator | Friday 29 August 2025 17:39:16 +0000 (0:00:00.765) 0:03:18.992 *********
2025-08-29 17:46:35.236458 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 17:46:35.236555 | orchestrator |
2025-08-29 17:46:35.236577 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-08-29 17:46:35.236595 | orchestrator | Friday 29 August 2025 17:39:17 +0000 (0:00:01.150) 0:03:20.142 *********
2025-08-29 17:46:35.236614 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.236632 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.236650 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.236668 | orchestrator |
2025-08-29 17:46:35.236685 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-08-29 17:46:35.236701 | orchestrator | Friday 29 August 2025 17:39:18 +0000 (0:00:00.365) 0:03:20.508 *********
2025-08-29 17:46:35.236729 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:46:35.236749 | orchestrator |
2025-08-29 17:46:35.236764 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-08-29 17:46:35.236781 | orchestrator | Friday 29 August 2025 17:39:18 +0000 (0:00:00.580) 0:03:21.088 *********
2025-08-29 17:46:35.236800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 17:46:35.237966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 17:46:35.238005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 17:46:35.238096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.238131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.238209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.238233 | orchestrator | 2025-08-29 17:46:35.238250 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-08-29 17:46:35.238268 | orchestrator | Friday 29 August 2025 17:39:21 +0000 (0:00:02.704) 0:03:23.793 ********* 2025-08-29 17:46:35.238286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 17:46:35.238304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.238322 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.238357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 17:46:35.238387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.238403 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.238468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 17:46:35.238515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.238533 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.238548 | orchestrator | 2025-08-29 17:46:35.238564 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-08-29 17:46:35.238579 | orchestrator | Friday 29 August 2025 17:39:23 +0000 (0:00:01.781) 0:03:25.574 ********* 2025-08-29 
17:46:35.238605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 17:46:35.238637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.238653 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.238721 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 17:46:35.238740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.238756 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.238779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 17:46:35.238805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.238820 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.238835 | orchestrator | 2025-08-29 17:46:35.238850 | orchestrator | TASK [nova : Copying over config.json files for services] 
********************** 2025-08-29 17:46:35.238866 | orchestrator | Friday 29 August 2025 17:39:24 +0000 (0:00:00.940) 0:03:26.515 ********* 2025-08-29 17:46:35.238932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 17:46:35.238955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 17:46:35.238981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 17:46:35.239011 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.239077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.239099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.239115 | orchestrator | 2025-08-29 17:46:35.239131 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-08-29 17:46:35.239148 | orchestrator | Friday 29 
August 2025 17:39:27 +0000 (0:00:03.168) 0:03:29.684 ********* 2025-08-29 17:46:35.239166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 17:46:35.239202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 17:46:35.239272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 17:46:35.239296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.239314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.239345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.239365 | orchestrator | 2025-08-29 17:46:35.239382 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-08-29 17:46:35.239399 | orchestrator | Friday 29 August 2025 17:39:38 +0000 (0:00:10.910) 0:03:40.595 
********* 2025-08-29 17:46:35.239425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 17:46:35.239567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.239593 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.239610 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 17:46:35.239629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.239660 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.239680 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 17:46:35.239696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.239712 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.239724 | orchestrator | 2025-08-29 17:46:35.239738 | orchestrator | TASK [nova : Copying over 
nova-api-wsgi.conf] ********************************** 2025-08-29 17:46:35.239751 | orchestrator | Friday 29 August 2025 17:39:40 +0000 (0:00:01.861) 0:03:42.456 ********* 2025-08-29 17:46:35.239764 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:46:35.239777 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:46:35.239790 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:46:35.239803 | orchestrator | 2025-08-29 17:46:35.239857 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-08-29 17:46:35.239873 | orchestrator | Friday 29 August 2025 17:39:42 +0000 (0:00:02.327) 0:03:44.784 ********* 2025-08-29 17:46:35.239885 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.239898 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.239910 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.239924 | orchestrator | 2025-08-29 17:46:35.239938 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-08-29 17:46:35.239951 | orchestrator | Friday 29 August 2025 17:39:43 +0000 (0:00:00.804) 0:03:45.589 ********* 2025-08-29 17:46:35.239965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 17:46:35.239995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.240022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 17:46:35.240077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 17:46:35.240096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.240119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.240133 | orchestrator | 2025-08-29 17:46:35.240145 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 17:46:35.240159 | orchestrator | Friday 29 August 2025 17:39:46 +0000 (0:00:03.476) 0:03:49.065 ********* 2025-08-29 17:46:35.240172 | orchestrator | 2025-08-29 17:46:35.240185 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 17:46:35.240198 | orchestrator | Friday 29 August 2025 17:39:47 +0000 (0:00:00.335) 0:03:49.400 ********* 2025-08-29 17:46:35.240211 | orchestrator | 2025-08-29 17:46:35.240225 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 17:46:35.240238 | orchestrator | Friday 29 August 2025 17:39:47 +0000 (0:00:00.393) 0:03:49.794 ********* 2025-08-29 17:46:35.240251 | orchestrator | 2025-08-29 17:46:35.240264 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-08-29 17:46:35.240277 | orchestrator | Friday 29 August 2025 17:39:47 +0000 (0:00:00.529) 0:03:50.323 ********* 2025-08-29 17:46:35.240292 | 
orchestrator | changed: [testbed-node-0] 2025-08-29 17:46:35.240306 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:46:35.240320 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:46:35.240334 | orchestrator | 2025-08-29 17:46:35.240379 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-08-29 17:46:35.240394 | orchestrator | Friday 29 August 2025 17:40:13 +0000 (0:00:25.244) 0:04:15.568 ********* 2025-08-29 17:46:35.240408 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:46:35.240421 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:46:35.240434 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:46:35.240447 | orchestrator | 2025-08-29 17:46:35.240460 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-08-29 17:46:35.240494 | orchestrator | 2025-08-29 17:46:35.240506 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 17:46:35.240514 | orchestrator | Friday 29 August 2025 17:40:26 +0000 (0:00:13.332) 0:04:28.900 ********* 2025-08-29 17:46:35.240524 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:46:35.240533 | orchestrator | 2025-08-29 17:46:35.240543 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 17:46:35.240557 | orchestrator | Friday 29 August 2025 17:40:28 +0000 (0:00:02.006) 0:04:30.907 ********* 2025-08-29 17:46:35.240569 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:46:35.240582 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:46:35.240594 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:46:35.240607 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.240621 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.240633 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.240645 | orchestrator | 2025-08-29 17:46:35.240658 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-08-29 17:46:35.241586 | orchestrator | Friday 29 August 2025 17:40:29 +0000 (0:00:00.693) 0:04:31.601 ********* 2025-08-29 17:46:35.241616 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.241630 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.241642 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.241656 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:46:35.241669 | orchestrator | 2025-08-29 17:46:35.241685 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-08-29 17:46:35.241765 | orchestrator | Friday 29 August 2025 17:40:30 +0000 (0:00:01.687) 0:04:33.289 ********* 2025-08-29 17:46:35.241783 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-08-29 17:46:35.241797 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-08-29 17:46:35.241810 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-08-29 17:46:35.241823 | orchestrator | 2025-08-29 17:46:35.241836 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-08-29 17:46:35.241850 | orchestrator | Friday 29 August 2025 17:40:31 +0000 (0:00:00.965) 0:04:34.254 ********* 2025-08-29 17:46:35.241863 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-08-29 17:46:35.241875 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-08-29 17:46:35.241888 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-08-29 17:46:35.241901 | orchestrator | 2025-08-29 17:46:35.241915 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-08-29 17:46:35.241929 | orchestrator | Friday 29 
August 2025 17:40:33 +0000 (0:00:01.421) 0:04:35.676 ********* 2025-08-29 17:46:35.241941 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-08-29 17:46:35.241954 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:46:35.241967 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-08-29 17:46:35.241979 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:46:35.241993 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-08-29 17:46:35.242006 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:46:35.242059 | orchestrator | 2025-08-29 17:46:35.242073 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-08-29 17:46:35.242085 | orchestrator | Friday 29 August 2025 17:40:35 +0000 (0:00:02.047) 0:04:37.724 ********* 2025-08-29 17:46:35.242098 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 17:46:35.242112 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 17:46:35.242124 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-08-29 17:46:35.242136 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-08-29 17:46:35.242149 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.242161 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 17:46:35.242174 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 17:46:35.242187 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-08-29 17:46:35.242199 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.242213 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 17:46:35.242225 | orchestrator | skipping: [testbed-node-2] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 17:46:35.242238 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 17:46:35.242251 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.242264 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 17:46:35.242276 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 17:46:35.242289 | orchestrator | 2025-08-29 17:46:35.242301 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-08-29 17:46:35.242328 | orchestrator | Friday 29 August 2025 17:40:36 +0000 (0:00:01.603) 0:04:39.327 ********* 2025-08-29 17:46:35.242341 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:46:35.242354 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.242366 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.242388 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:46:35.242401 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:46:35.242415 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.242427 | orchestrator | 2025-08-29 17:46:35.242440 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-08-29 17:46:35.242453 | orchestrator | Friday 29 August 2025 17:40:39 +0000 (0:00:02.983) 0:04:42.311 ********* 2025-08-29 17:46:35.242467 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.242540 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.242553 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.242567 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:46:35.242579 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:46:35.242591 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:46:35.242604 | orchestrator | 2025-08-29 17:46:35.242615 | orchestrator | TASK [nova-cell : Ensuring 
config directories exist] *************************** 2025-08-29 17:46:35.242627 | orchestrator | Friday 29 August 2025 17:40:42 +0000 (0:00:02.196) 0:04:44.508 ********* 2025-08-29 17:46:35.242642 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 17:46:35.242718 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 17:46:35.242737 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.242752 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 17:46:35.242785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 17:46:35.242799 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 17:46:35.242847 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 17:46:35.242862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.242877 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.242898 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 17:46:35.242917 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.242931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 17:46:35.242977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 17:46:35.242994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.243007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.243021 | orchestrator | 2025-08-29 17:46:35.243034 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 17:46:35.243056 | orchestrator | Friday 29 August 2025 17:40:48 +0000 (0:00:05.929) 0:04:50.438 ********* 2025-08-29 17:46:35.243068 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:46:35.243080 | orchestrator | 2025-08-29 17:46:35.243092 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-08-29 17:46:35.243104 | orchestrator | Friday 29 August 2025 17:40:50 +0000 (0:00:02.785) 0:04:53.223 ********* 2025-08-29 17:46:35.243116 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 17:46:35.243134 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 17:46:35.243180 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 17:46:35.243195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 17:46:35.243207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 17:46:35.243228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 17:46:35.243241 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 17:46:35.243257 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 17:46:35.243269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 17:46:35.243311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.243324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.243343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.243355 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.243378 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}}) 2025-08-29 17:46:35.243391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.243403 | orchestrator | 2025-08-29 17:46:35.243415 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-08-29 17:46:35.243427 | orchestrator | Friday 29 August 2025 17:40:55 +0000 (0:00:04.981) 0:04:58.205 ********* 2025-08-29 17:46:35.243493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 17:46:35.244315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 17:46:35.244339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.244351 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:46:35.244371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 17:46:35.244383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 17:46:35.244445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.244462 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:46:35.244490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 17:46:35.244514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 17:46:35.244527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.244546 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:46:35.244559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 17:46:35.244571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.244582 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.244629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 17:46:35.244655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.244668 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.244679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 17:46:35.244691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.244703 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.244714 | orchestrator | 2025-08-29 17:46:35.244726 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-08-29 17:46:35.244737 | orchestrator | Friday 29 August 2025 17:41:01 +0000 (0:00:05.790) 0:05:03.995 ********* 2025-08-29 17:46:35.244754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 17:46:35.244767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  
2025-08-29 17:46:35.244817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.244838 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:46:35.244849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 17:46:35.244860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 17:46:35.244876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.244888 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:46:35.244899 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 17:46:35.244942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 17:46:35.244962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.244974 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:46:35.244985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 17:46:35.244996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.245008 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.245022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 17:46:35.245033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.245043 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.245054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 17:46:35.245102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.245115 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.245126 | orchestrator | 2025-08-29 17:46:35.245136 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 17:46:35.245147 | 
orchestrator | Friday 29 August 2025 17:41:06 +0000 (0:00:04.709) 0:05:08.705 ********* 2025-08-29 17:46:35.245158 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.245169 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.245178 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.245189 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:46:35.245200 | orchestrator | 2025-08-29 17:46:35.245210 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-08-29 17:46:35.245220 | orchestrator | Friday 29 August 2025 17:41:08 +0000 (0:00:02.037) 0:05:10.743 ********* 2025-08-29 17:46:35.245230 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 17:46:35.245241 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 17:46:35.245252 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 17:46:35.245263 | orchestrator | 2025-08-29 17:46:35.245273 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-08-29 17:46:35.245285 | orchestrator | Friday 29 August 2025 17:41:10 +0000 (0:00:02.508) 0:05:13.251 ********* 2025-08-29 17:46:35.245296 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 17:46:35.245307 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 17:46:35.245319 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 17:46:35.245331 | orchestrator | 2025-08-29 17:46:35.245342 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-08-29 17:46:35.245352 | orchestrator | Friday 29 August 2025 17:41:13 +0000 (0:00:02.584) 0:05:15.836 ********* 2025-08-29 17:46:35.245363 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:46:35.245374 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:46:35.245384 | orchestrator | ok: [testbed-node-5] 2025-08-29 
17:46:35.245395 | orchestrator | 2025-08-29 17:46:35.245406 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-08-29 17:46:35.245416 | orchestrator | Friday 29 August 2025 17:41:15 +0000 (0:00:02.270) 0:05:18.107 ********* 2025-08-29 17:46:35.245427 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:46:35.245437 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:46:35.245448 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:46:35.245460 | orchestrator | 2025-08-29 17:46:35.245524 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-08-29 17:46:35.245540 | orchestrator | Friday 29 August 2025 17:41:17 +0000 (0:00:01.810) 0:05:19.917 ********* 2025-08-29 17:46:35.245551 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-08-29 17:46:35.245562 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-08-29 17:46:35.245608 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-08-29 17:46:35.245621 | orchestrator | 2025-08-29 17:46:35.245632 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-08-29 17:46:35.245644 | orchestrator | Friday 29 August 2025 17:41:19 +0000 (0:00:02.011) 0:05:21.929 ********* 2025-08-29 17:46:35.245663 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-08-29 17:46:35.245675 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-08-29 17:46:35.245687 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-08-29 17:46:35.245698 | orchestrator | 2025-08-29 17:46:35.245709 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-08-29 17:46:35.245721 | orchestrator | Friday 29 August 2025 17:41:21 +0000 (0:00:02.397) 0:05:24.327 ********* 2025-08-29 17:46:35.245731 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-08-29 
17:46:35.245743 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-08-29 17:46:35.245753 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-08-29 17:46:35.245764 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-08-29 17:46:35.245775 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-08-29 17:46:35.245786 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-08-29 17:46:35.245797 | orchestrator | 2025-08-29 17:46:35.245808 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-08-29 17:46:35.245819 | orchestrator | Friday 29 August 2025 17:41:29 +0000 (0:00:07.850) 0:05:32.177 ********* 2025-08-29 17:46:35.245830 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:46:35.245841 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:46:35.245851 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:46:35.245862 | orchestrator | 2025-08-29 17:46:35.245873 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-08-29 17:46:35.245884 | orchestrator | Friday 29 August 2025 17:41:31 +0000 (0:00:01.308) 0:05:33.486 ********* 2025-08-29 17:46:35.245895 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:46:35.245906 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:46:35.245917 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:46:35.245929 | orchestrator | 2025-08-29 17:46:35.245940 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-08-29 17:46:35.245951 | orchestrator | Friday 29 August 2025 17:41:31 +0000 (0:00:00.449) 0:05:33.935 ********* 2025-08-29 17:46:35.245962 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:46:35.245975 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:46:35.245986 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:46:35.245997 | orchestrator | 
2025-08-29 17:46:35.246083 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-08-29 17:46:35.246093 | orchestrator | Friday 29 August 2025 17:41:34 +0000 (0:00:03.226) 0:05:37.162 ********* 2025-08-29 17:46:35.246100 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-08-29 17:46:35.246108 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-08-29 17:46:35.246114 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-08-29 17:46:35.246120 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-08-29 17:46:35.246127 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-08-29 17:46:35.246133 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-08-29 17:46:35.246140 | orchestrator | 2025-08-29 17:46:35.246146 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-08-29 17:46:35.246159 | orchestrator | Friday 29 August 2025 17:41:41 +0000 (0:00:06.592) 0:05:43.755 ********* 2025-08-29 17:46:35.246166 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 17:46:35.246172 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 17:46:35.246178 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 17:46:35.246184 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 17:46:35.246190 | orchestrator | changed: 
[testbed-node-3] 2025-08-29 17:46:35.246196 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 17:46:35.246202 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:46:35.246208 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 17:46:35.246214 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:46:35.246220 | orchestrator | 2025-08-29 17:46:35.246226 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-08-29 17:46:35.246233 | orchestrator | Friday 29 August 2025 17:41:47 +0000 (0:00:05.633) 0:05:49.389 ********* 2025-08-29 17:46:35.246239 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:46:35.246245 | orchestrator | 2025-08-29 17:46:35.246251 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-08-29 17:46:35.246257 | orchestrator | Friday 29 August 2025 17:41:47 +0000 (0:00:00.223) 0:05:49.612 ********* 2025-08-29 17:46:35.246263 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:46:35.246269 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:46:35.246275 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:46:35.246281 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.246287 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.246293 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.246299 | orchestrator | 2025-08-29 17:46:35.246305 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-08-29 17:46:35.246312 | orchestrator | Friday 29 August 2025 17:41:47 +0000 (0:00:00.737) 0:05:50.349 ********* 2025-08-29 17:46:35.246318 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 17:46:35.246324 | orchestrator | 2025-08-29 17:46:35.246330 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-08-29 17:46:35.246336 | orchestrator | Friday 29 August 2025 
17:41:48 +0000 (0:00:00.953) 0:05:51.303 ********* 2025-08-29 17:46:35.246350 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:46:35.246357 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:46:35.246363 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:46:35.246369 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.246375 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.246381 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.246387 | orchestrator | 2025-08-29 17:46:35.246393 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-08-29 17:46:35.246399 | orchestrator | Friday 29 August 2025 17:41:50 +0000 (0:00:01.170) 0:05:52.473 ********* 2025-08-29 17:46:35.246406 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 17:46:35.246420 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247150 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247189 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}}) 2025-08-29 17:46:35.247214 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247255 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247267 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 
'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247279 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247286 | orchestrator | 2025-08-29 17:46:35.247293 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-08-29 17:46:35.247301 | orchestrator | Friday 29 August 2025 17:41:54 +0000 (0:00:04.833) 0:05:57.307 ********* 2025-08-29 17:46:35.247308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 17:46:35.247316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 17:46:35.247327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 17:46:35.247338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 17:46:35.247351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 17:46:35.247358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 17:46:35.247366 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247376 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247399 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 17:46:35.247419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.248132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.248147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 17:46:35.248170 | orchestrator | 2025-08-29 17:46:35.248178 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-08-29 17:46:35.248186 | orchestrator | Friday 29 August 2025 17:42:03 +0000 (0:00:08.863) 0:06:06.170 ********* 2025-08-29 17:46:35.248193 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:46:35.248200 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:46:35.248207 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:46:35.248214 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.248221 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.248227 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.248233 | orchestrator | 2025-08-29 17:46:35.248239 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-08-29 17:46:35.248246 | orchestrator | Friday 29 August 2025 17:42:05 +0000 (0:00:01.784) 0:06:07.954 ********* 2025-08-29 17:46:35.248252 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 17:46:35.248258 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 17:46:35.248264 | orchestrator | 
skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 17:46:35.248271 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 17:46:35.248283 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 17:46:35.248290 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 17:46:35.248297 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.248303 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 17:46:35.248309 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.248315 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 17:46:35.248321 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 17:46:35.248327 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.248334 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 17:46:35.248340 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 17:46:35.248347 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 17:46:35.248353 | orchestrator | 2025-08-29 17:46:35.248359 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-08-29 17:46:35.248365 | orchestrator | Friday 29 August 2025 17:42:10 +0000 (0:00:04.549) 0:06:12.504 ********* 2025-08-29 17:46:35.248372 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:46:35.248378 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:46:35.248384 | orchestrator | skipping: [testbed-node-5] 2025-08-29 
17:46:35.248390 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.248396 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.248402 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.248408 | orchestrator | 2025-08-29 17:46:35.248414 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-08-29 17:46:35.248421 | orchestrator | Friday 29 August 2025 17:42:10 +0000 (0:00:00.669) 0:06:13.174 ********* 2025-08-29 17:46:35.248427 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 17:46:35.248433 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 17:46:35.248439 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 17:46:35.248450 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 17:46:35.248456 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 17:46:35.248462 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 17:46:35.248468 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 17:46:35.248486 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 17:46:35.248493 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 17:46:35.248502 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 
'service': 'nova-libvirt'})  2025-08-29 17:46:35.248509 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.248515 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 17:46:35.248521 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.248527 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 17:46:35.248533 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 17:46:35.248539 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.248545 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 17:46:35.248551 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 17:46:35.248557 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 17:46:35.248563 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 17:46:35.248569 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 17:46:35.248575 | orchestrator | 2025-08-29 17:46:35.248581 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-08-29 17:46:35.248587 | orchestrator | Friday 29 August 2025 17:42:17 +0000 (0:00:06.316) 0:06:19.490 ********* 2025-08-29 17:46:35.248594 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 17:46:35.248600 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 
'sshd_config'})  2025-08-29 17:46:35.248610 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 17:46:35.248617 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 17:46:35.248623 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 17:46:35.248629 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 17:46:35.248635 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 17:46:35.248641 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 17:46:35.248647 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 17:46:35.248654 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 17:46:35.248660 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 17:46:35.248666 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 17:46:35.249375 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 17:46:35.249385 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.249392 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 17:46:35.249399 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.249406 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 17:46:35.249413 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 17:46:35.249420 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
17:46:35.249427 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 17:46:35.249434 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 17:46:35.249441 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 17:46:35.249448 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 17:46:35.249455 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 17:46:35.249462 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 17:46:35.249469 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 17:46:35.249511 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 17:46:35.249518 | orchestrator | 2025-08-29 17:46:35.249525 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-08-29 17:46:35.249532 | orchestrator | Friday 29 August 2025 17:42:24 +0000 (0:00:07.517) 0:06:27.007 ********* 2025-08-29 17:46:35.249539 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:46:35.249546 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:46:35.249553 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:46:35.249560 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.249567 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.249573 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.249580 | orchestrator | 2025-08-29 17:46:35.249596 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-08-29 17:46:35.249603 | orchestrator | Friday 29 August 2025 17:42:25 +0000 (0:00:00.925) 0:06:27.932 ********* 2025-08-29 17:46:35.249610 
| orchestrator | skipping: [testbed-node-3] 2025-08-29 17:46:35.249617 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:46:35.249624 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:46:35.249631 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.249638 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.249644 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.249651 | orchestrator | 2025-08-29 17:46:35.249657 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-08-29 17:46:35.249664 | orchestrator | Friday 29 August 2025 17:42:26 +0000 (0:00:00.633) 0:06:28.566 ********* 2025-08-29 17:46:35.249671 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:46:35.249678 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:46:35.249685 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:46:35.249691 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:46:35.249698 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:46:35.249704 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:46:35.249710 | orchestrator | 2025-08-29 17:46:35.249716 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-08-29 17:46:35.249722 | orchestrator | Friday 29 August 2025 17:42:28 +0000 (0:00:02.146) 0:06:30.713 ********* 2025-08-29 17:46:35.249736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 17:46:35.249749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 17:46:35.249756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 17:46:35.249763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 17:46:35.249773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 17:46:35.249780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 17:46:35.249790 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:46:35.249797 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:46:35.249808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 17:46:35.249815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 17:46:35.249821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 17:46:35.249828 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:46:35.249837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 17:46:35.249844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:46:35.249855 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.249861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 17:46:35.249873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:46:35.249880 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.249886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 17:46:35.249892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:46:35.249899 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.249905 | orchestrator |
2025-08-29 17:46:35.249911 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-08-29 17:46:35.249917 | orchestrator | Friday 29 August 2025 17:42:30 +0000 (0:00:01.694) 0:06:32.407 *********
2025-08-29 17:46:35.249924 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-08-29 17:46:35.249930 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-08-29 17:46:35.249936 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:46:35.249942 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-08-29 17:46:35.249948 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-08-29 17:46:35.249954 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:46:35.249960 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-08-29 17:46:35.249966 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-08-29 17:46:35.249973 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:46:35.249978 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-08-29 17:46:35.249985 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-08-29 17:46:35.249991 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-08-29 17:46:35.249999 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-08-29 17:46:35.250009 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.250041 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.250047 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-08-29 17:46:35.250053 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-08-29 17:46:35.250058 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.250064 | orchestrator |
2025-08-29 17:46:35.250069 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-08-29 17:46:35.250074 | orchestrator | Friday 29 August 2025 17:42:31 +0000 (0:00:00.974) 0:06:33.382 *********
2025-08-29 17:46:35.250080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 17:46:35.250091 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 17:46:35.250098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 17:46:35.250104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
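Editor's note: the `(item=...)` payloads in the "Check nova-cell containers" task above each carry a kolla-ansible-style healthcheck dict (`interval`, `retries`, `start_period`, `test`, `timeout`). As a minimal illustrative sketch only (the helper name and the seconds-unit mapping are assumptions, not part of kolla-ansible), such a dict maps onto Docker's health-check flags roughly like this:

```python
# Illustrative sketch: map a kolla-style healthcheck dict (as dumped in the
# log items above) onto equivalent `docker run` health flags. The function
# name and the assumption that values are in seconds are hypothetical.
def healthcheck_to_docker_flags(hc: dict) -> list[str]:
    # 'test' is ['CMD-SHELL', '<command>'] in the log; fall back to joining.
    cmd = hc["test"][1] if hc["test"][0] == "CMD-SHELL" else " ".join(hc["test"])
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

flags = healthcheck_to_docker_flags({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port nova-conductor 5672"],
    "timeout": "30",
})
```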
2025-08-29 17:46:35.250113 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 17:46:35.250123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:46:35.250129 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 17:46:35.250139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:46:35.250145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 17:46:35.250151 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 17:46:35.250157 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 17:46:35.250171 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 17:46:35.250177 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 17:46:35.250186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:46:35.250192 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 17:46:35.250197 | orchestrator |
2025-08-29 17:46:35.250203 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-08-29 17:46:35.250208 | orchestrator | Friday 29 August 2025 17:42:35 +0000 (0:00:04.107) 0:06:37.490 *********
2025-08-29 17:46:35.250214 |
orchestrator | skipping: [testbed-node-3]
2025-08-29 17:46:35.250219 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:46:35.250225 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:46:35.250230 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.250235 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.250241 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.250246 | orchestrator |
2025-08-29 17:46:35.250251 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 17:46:35.250257 | orchestrator | Friday 29 August 2025 17:42:36 +0000 (0:00:01.045) 0:06:38.535 *********
2025-08-29 17:46:35.250266 | orchestrator |
2025-08-29 17:46:35.250271 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 17:46:35.250277 | orchestrator | Friday 29 August 2025 17:42:36 +0000 (0:00:00.221) 0:06:38.757 *********
2025-08-29 17:46:35.250282 | orchestrator |
2025-08-29 17:46:35.250288 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 17:46:35.250293 | orchestrator | Friday 29 August 2025 17:42:36 +0000 (0:00:00.195) 0:06:38.953 *********
2025-08-29 17:46:35.250298 | orchestrator |
2025-08-29 17:46:35.250304 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 17:46:35.250309 | orchestrator | Friday 29 August 2025 17:42:36 +0000 (0:00:00.152) 0:06:39.105 *********
2025-08-29 17:46:35.250315 | orchestrator |
2025-08-29 17:46:35.250320 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 17:46:35.250325 | orchestrator | Friday 29 August 2025 17:42:36 +0000 (0:00:00.166) 0:06:39.271 *********
2025-08-29 17:46:35.250331 | orchestrator |
2025-08-29 17:46:35.250336 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 17:46:35.250342 | orchestrator | Friday 29 August 2025 17:42:37 +0000 (0:00:00.151) 0:06:39.423 *********
2025-08-29 17:46:35.250347 | orchestrator |
2025-08-29 17:46:35.250352 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-08-29 17:46:35.250358 | orchestrator | Friday 29 August 2025 17:42:37 +0000 (0:00:00.508) 0:06:39.931 *********
2025-08-29 17:46:35.250363 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:35.250371 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:46:35.250377 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:46:35.250382 | orchestrator |
2025-08-29 17:46:35.250388 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-08-29 17:46:35.250393 | orchestrator | Friday 29 August 2025 17:42:55 +0000 (0:00:18.001) 0:06:57.933 *********
2025-08-29 17:46:35.250399 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:35.250404 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:46:35.250409 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:46:35.250415 | orchestrator |
2025-08-29 17:46:35.250420 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-08-29 17:46:35.250425 | orchestrator | Friday 29 August 2025 17:43:15 +0000 (0:00:20.389) 0:07:18.323 *********
2025-08-29 17:46:35.250431 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:46:35.250436 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:46:35.250441 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:46:35.250447 | orchestrator |
2025-08-29 17:46:35.250452 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-08-29 17:46:35.250458 | orchestrator | Friday 29 August 2025 17:43:44 +0000 (0:00:28.744) 0:07:47.067 *********
2025-08-29 17:46:35.250463 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:46:35.250468 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:46:35.250483 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:46:35.250488 | orchestrator |
2025-08-29 17:46:35.250493 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-08-29 17:46:35.250499 | orchestrator | Friday 29 August 2025 17:44:45 +0000 (0:01:00.414) 0:08:47.481 *********
2025-08-29 17:46:35.250504 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:46:35.250509 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:46:35.250515 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:46:35.250520 | orchestrator |
2025-08-29 17:46:35.250525 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-08-29 17:46:35.250531 | orchestrator | Friday 29 August 2025 17:44:46 +0000 (0:00:00.921) 0:08:48.403 *********
2025-08-29 17:46:35.250536 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:46:35.250541 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:46:35.250547 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:46:35.250552 | orchestrator |
2025-08-29 17:46:35.250557 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-08-29 17:46:35.250566 | orchestrator | Friday 29 August 2025 17:44:46 +0000 (0:00:00.803) 0:08:49.207 *********
2025-08-29 17:46:35.250575 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:46:35.250580 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:46:35.250586 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:46:35.250591 | orchestrator |
2025-08-29 17:46:35.250596 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-08-29 17:46:35.250602 | orchestrator | Friday 29 August 2025 17:45:16 +0000 (0:00:30.084) 0:09:19.291 *********
2025-08-29 17:46:35.250607 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:46:35.250612 | orchestrator |
2025-08-29 17:46:35.250618 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-08-29 17:46:35.250623 | orchestrator | Friday 29 August 2025 17:45:17 +0000 (0:00:00.168) 0:09:19.459 *********
2025-08-29 17:46:35.250628 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:46:35.250634 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:46:35.250639 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.250644 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.250650 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.250656 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-08-29 17:46:35.250661 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-08-29 17:46:35.250667 | orchestrator |
2025-08-29 17:46:35.250672 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-08-29 17:46:35.250677 | orchestrator | Friday 29 August 2025 17:45:42 +0000 (0:00:24.933) 0:09:44.393 *********
2025-08-29 17:46:35.250683 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:46:35.250688 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.250693 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:46:35.250698 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:46:35.250704 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.250709 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.250714 | orchestrator |
2025-08-29 17:46:35.250720 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-08-29 17:46:35.250725 | orchestrator | Friday 29 August 2025 17:45:54 +0000 (0:00:12.267) 0:09:56.660 *********
2025-08-29 17:46:35.250730 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:46:35.250736 | orchestrator |
skipping: [testbed-node-2]
2025-08-29 17:46:35.250741 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:46:35.250746 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.250751 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.250757 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5
2025-08-29 17:46:35.250762 | orchestrator |
2025-08-29 17:46:35.250768 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-08-29 17:46:35.250773 | orchestrator | Friday 29 August 2025 17:45:59 +0000 (0:00:04.829) 0:10:01.490 *********
2025-08-29 17:46:35.250778 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-08-29 17:46:35.250784 | orchestrator |
2025-08-29 17:46:35.250789 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-08-29 17:46:35.250794 | orchestrator | Friday 29 August 2025 17:46:11 +0000 (0:00:12.263) 0:10:13.754 *********
2025-08-29 17:46:35.250800 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-08-29 17:46:35.250805 | orchestrator |
2025-08-29 17:46:35.250811 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-08-29 17:46:35.250816 | orchestrator | Friday 29 August 2025 17:46:12 +0000 (0:00:01.469) 0:10:15.223 *********
2025-08-29 17:46:35.250821 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:46:35.250827 | orchestrator |
2025-08-29 17:46:35.250832 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-08-29 17:46:35.250838 | orchestrator | Friday 29 August 2025 17:46:14 +0000 (0:00:01.442) 0:10:16.666 *********
2025-08-29 17:46:35.250843 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-08-29 17:46:35.250854 | orchestrator |
2025-08-29 17:46:35.250859 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-08-29 17:46:35.250865 | orchestrator | Friday 29 August 2025 17:46:25 +0000 (0:00:10.750) 0:10:27.416 *********
2025-08-29 17:46:35.250870 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:46:35.250876 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:46:35.250881 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:46:35.250886 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:46:35.250892 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:46:35.250897 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:46:35.250902 | orchestrator |
2025-08-29 17:46:35.250908 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-08-29 17:46:35.250913 | orchestrator |
2025-08-29 17:46:35.250947 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-08-29 17:46:35.250953 | orchestrator | Friday 29 August 2025 17:46:27 +0000 (0:00:02.015) 0:10:29.432 *********
2025-08-29 17:46:35.250958 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:46:35.250963 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:46:35.250969 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:46:35.250974 | orchestrator |
2025-08-29 17:46:35.250979 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-08-29 17:46:35.250985 | orchestrator |
2025-08-29 17:46:35.250990 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-08-29 17:46:35.250995 | orchestrator | Friday 29 August 2025 17:46:28 +0000 (0:00:01.129) 0:10:30.561 *********
2025-08-29 17:46:35.251001 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.251006 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.251011 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.251016 | orchestrator |
2025-08-29 17:46:35.251022 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-08-29 17:46:35.251027 | orchestrator |
2025-08-29 17:46:35.251033 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-08-29 17:46:35.251038 | orchestrator | Friday 29 August 2025 17:46:28 +0000 (0:00:00.595) 0:10:31.157 *********
2025-08-29 17:46:35.251043 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-08-29 17:46:35.251052 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-08-29 17:46:35.251058 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-08-29 17:46:35.251064 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-08-29 17:46:35.251069 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-08-29 17:46:35.251074 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-08-29 17:46:35.251080 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:46:35.251085 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-08-29 17:46:35.251091 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-08-29 17:46:35.251096 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-08-29 17:46:35.251101 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-08-29 17:46:35.251107 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-08-29 17:46:35.251112 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-08-29 17:46:35.251118 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:46:35.251123 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-08-29 17:46:35.251128 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-08-29 17:46:35.251134 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-08-29 17:46:35.251139
| orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-08-29 17:46:35.251144 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-08-29 17:46:35.251150 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-08-29 17:46:35.251159 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:46:35.251164 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-08-29 17:46:35.251170 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-08-29 17:46:35.251175 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-08-29 17:46:35.251181 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-08-29 17:46:35.251186 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-08-29 17:46:35.251191 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-08-29 17:46:35.251197 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.251202 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-08-29 17:46:35.251207 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-08-29 17:46:35.251213 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-08-29 17:46:35.251218 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-08-29 17:46:35.251223 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-08-29 17:46:35.251229 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-08-29 17:46:35.251234 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.251239 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-08-29 17:46:35.251245 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-08-29 17:46:35.251250 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-08-29 17:46:35.251255 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-08-29 17:46:35.251261 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-08-29 17:46:35.251266 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-08-29 17:46:35.251271 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.251277 | orchestrator |
2025-08-29 17:46:35.251285 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-08-29 17:46:35.251290 | orchestrator |
2025-08-29 17:46:35.251296 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-08-29 17:46:35.251301 | orchestrator | Friday 29 August 2025 17:46:30 +0000 (0:00:01.666) 0:10:32.824 *********
2025-08-29 17:46:35.251307 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-08-29 17:46:35.251312 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-08-29 17:46:35.251317 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.251323 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-08-29 17:46:35.251328 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-08-29 17:46:35.251333 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.251339 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-08-29 17:46:35.251344 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-08-29 17:46:35.251349 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.251354 | orchestrator |
2025-08-29 17:46:35.251360 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-08-29 17:46:35.251365 | orchestrator |
2025-08-29 17:46:35.251371 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-08-29 17:46:35.251376 | orchestrator | Friday 29 August 2025 17:46:31 +0000 (0:00:00.847) 0:10:33.672 *********
2025-08-29 17:46:35.251381 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.251387 | orchestrator |
2025-08-29 17:46:35.251392 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-08-29 17:46:35.251398 | orchestrator |
2025-08-29 17:46:35.251403 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-08-29 17:46:35.251408 | orchestrator | Friday 29 August 2025 17:46:32 +0000 (0:00:00.760) 0:10:34.432 *********
2025-08-29 17:46:35.251414 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:46:35.251419 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:46:35.251428 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:46:35.251433 | orchestrator |
2025-08-29 17:46:35.251439 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:46:35.251444 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:46:35.251453 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-08-29 17:46:35.251459 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-08-29 17:46:35.251464 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-08-29 17:46:35.251478 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-08-29 17:46:35.251484 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-08-29 17:46:35.251489 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-08-29 17:46:35.251495 | orchestrator |
2025-08-29 17:46:35.251500 | orchestrator |
2025-08-29 17:46:35.251506 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:46:35.251511 | orchestrator | Friday 29 August 2025 17:46:32 +0000 (0:00:00.488) 0:10:34.920 *********
2025-08-29 17:46:35.251516 | orchestrator | ===============================================================================
2025-08-29 17:46:35.251522 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 60.41s
2025-08-29 17:46:35.251527 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 30.08s
2025-08-29 17:46:35.251533 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 28.98s
2025-08-29 17:46:35.251538 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 28.74s
2025-08-29 17:46:35.251543 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 25.24s
2025-08-29 17:46:35.251548 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 24.93s
2025-08-29 17:46:35.251554 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.86s
2025-08-29 17:46:35.251559 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 20.39s
2025-08-29 17:46:35.251564 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 18.00s
2025-08-29 17:46:35.251570 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.54s
2025-08-29 17:46:35.251575 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.03s
2025-08-29 17:46:35.251580 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.74s
2025-08-29 17:46:35.251586 | orchestrator | nova : Restart nova-api container -------------------------------------- 13.33s
2025-08-29 17:46:35.251591 |
orchestrator | nova-cell : Create cell ------------------------------------------------ 12.81s 2025-08-29 17:46:35.251596 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.49s 2025-08-29 17:46:35.251602 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 12.27s 2025-08-29 17:46:35.251613 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.26s 2025-08-29 17:46:35.251619 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 10.91s 2025-08-29 17:46:35.251624 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.75s 2025-08-29 17:46:35.251630 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 8.86s 2025-08-29 17:46:35.251638 | orchestrator | 2025-08-29 17:46:35 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:46:35.251644 | orchestrator | 2025-08-29 17:46:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:46:38.262156 | orchestrator | 2025-08-29 17:46:38 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:46:38.262280 | orchestrator | 2025-08-29 17:46:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:46:41.304354 | orchestrator | 2025-08-29 17:46:41 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:46:41.304536 | orchestrator | 2025-08-29 17:46:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:46:44.338058 | orchestrator | 2025-08-29 17:46:44 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:46:44.338144 | orchestrator | 2025-08-29 17:46:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:46:47.375337 | orchestrator | 2025-08-29 17:46:47 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:46:47.375471 | 
orchestrator | 2025-08-29 17:46:47 | INFO  | Wait 1 second(s) until the next check [repeated "Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED" polling from 17:46:50 to 17:48:46 omitted] 2025-08-29 17:48:49.031146 | orchestrator | 2025-08-29 17:48:49 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:48:49.031257 | orchestrator | 2025-08-29 17:48:49 | INFO  | Wait 1 second(s) until
the next check 2025-08-29 17:48:52.073359 | orchestrator | 2025-08-29 17:48:52 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:48:52.073495 | orchestrator | 2025-08-29 17:48:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:48:55.120872 | orchestrator | 2025-08-29 17:48:55 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:48:55.122317 | orchestrator | 2025-08-29 17:48:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:48:58.167014 | orchestrator | 2025-08-29 17:48:58 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:48:58.167101 | orchestrator | 2025-08-29 17:48:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:49:01.209847 | orchestrator | 2025-08-29 17:49:01 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:49:01.209937 | orchestrator | 2025-08-29 17:49:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:49:04.253254 | orchestrator | 2025-08-29 17:49:04 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state STARTED 2025-08-29 17:49:04.253344 | orchestrator | 2025-08-29 17:49:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:49:07.298559 | orchestrator | 2025-08-29 17:49:07 | INFO  | Task 93f980c0-64ef-4050-a275-07b229f67e34 is in state SUCCESS 2025-08-29 17:49:07.300093 | orchestrator | 2025-08-29 17:49:07.300160 | orchestrator | 2025-08-29 17:49:07.300184 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:49:07.300204 | orchestrator | 2025-08-29 17:49:07.300934 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 17:49:07.300954 | orchestrator | Friday 29 August 2025 17:43:58 +0000 (0:00:00.291) 0:00:00.291 ********* 2025-08-29 17:49:07.300964 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:49:07.300975 | orchestrator | 
ok: [testbed-node-1] 2025-08-29 17:49:07.300985 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:49:07.300995 | orchestrator | 2025-08-29 17:49:07.301004 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:49:07.301014 | orchestrator | Friday 29 August 2025 17:43:59 +0000 (0:00:00.433) 0:00:00.724 ********* 2025-08-29 17:49:07.301025 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-08-29 17:49:07.301035 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-08-29 17:49:07.301045 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-08-29 17:49:07.301054 | orchestrator | 2025-08-29 17:49:07.301064 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-08-29 17:49:07.301074 | orchestrator | 2025-08-29 17:49:07.301084 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 17:49:07.301093 | orchestrator | Friday 29 August 2025 17:43:59 +0000 (0:00:00.623) 0:00:01.348 ********* 2025-08-29 17:49:07.301103 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:49:07.301114 | orchestrator | 2025-08-29 17:49:07.301123 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-08-29 17:49:07.301133 | orchestrator | Friday 29 August 2025 17:44:00 +0000 (0:00:00.746) 0:00:02.094 ********* 2025-08-29 17:49:07.301143 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-08-29 17:49:07.301153 | orchestrator | 2025-08-29 17:49:07.301163 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-08-29 17:49:07.301172 | orchestrator | Friday 29 August 2025 17:44:04 +0000 (0:00:03.608) 0:00:05.703 ********* 2025-08-29 17:49:07.301182 | orchestrator | changed: 
[testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-08-29 17:49:07.301192 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-08-29 17:49:07.301222 | orchestrator | 2025-08-29 17:49:07.301232 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-08-29 17:49:07.301242 | orchestrator | Friday 29 August 2025 17:44:10 +0000 (0:00:06.465) 0:00:12.168 ********* 2025-08-29 17:49:07.301271 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 17:49:07.301282 | orchestrator | 2025-08-29 17:49:07.301291 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-08-29 17:49:07.301301 | orchestrator | Friday 29 August 2025 17:44:13 +0000 (0:00:03.236) 0:00:15.405 ********* 2025-08-29 17:49:07.301311 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 17:49:07.301320 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-08-29 17:49:07.301330 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-08-29 17:49:07.301339 | orchestrator | 2025-08-29 17:49:07.301349 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-08-29 17:49:07.301358 | orchestrator | Friday 29 August 2025 17:44:22 +0000 (0:00:08.189) 0:00:23.595 ********* 2025-08-29 17:49:07.301368 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 17:49:07.301377 | orchestrator | 2025-08-29 17:49:07.301387 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-08-29 17:49:07.301396 | orchestrator | Friday 29 August 2025 17:44:25 +0000 (0:00:03.209) 0:00:26.804 ********* 2025-08-29 17:49:07.301406 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-08-29 17:49:07.301415 | orchestrator | ok: 
[testbed-node-0] => (item=octavia -> service -> admin) 2025-08-29 17:49:07.301425 | orchestrator | 2025-08-29 17:49:07.301434 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-08-29 17:49:07.301443 | orchestrator | Friday 29 August 2025 17:44:33 +0000 (0:00:07.683) 0:00:34.488 ********* 2025-08-29 17:49:07.301453 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-08-29 17:49:07.301482 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-08-29 17:49:07.301492 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-08-29 17:49:07.301502 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-08-29 17:49:07.301512 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-08-29 17:49:07.301521 | orchestrator | 2025-08-29 17:49:07.301530 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 17:49:07.301540 | orchestrator | Friday 29 August 2025 17:44:48 +0000 (0:00:15.494) 0:00:49.983 ********* 2025-08-29 17:49:07.301549 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:49:07.301559 | orchestrator | 2025-08-29 17:49:07.301569 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-08-29 17:49:07.301578 | orchestrator | Friday 29 August 2025 17:44:49 +0000 (0:00:00.908) 0:00:50.892 ********* 2025-08-29 17:49:07.301588 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:49:07.301597 | orchestrator | 2025-08-29 17:49:07.301607 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-08-29 17:49:07.301616 | orchestrator | Friday 29 August 2025 17:44:53 +0000 (0:00:04.565) 0:00:55.457 ********* 2025-08-29 17:49:07.301626 | orchestrator | changed: 
[testbed-node-0] 2025-08-29 17:49:07.301635 | orchestrator | 2025-08-29 17:49:07.301645 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-08-29 17:49:07.301698 | orchestrator | Friday 29 August 2025 17:44:59 +0000 (0:00:05.189) 0:01:00.647 ********* 2025-08-29 17:49:07.301711 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:49:07.301721 | orchestrator | 2025-08-29 17:49:07.301731 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-08-29 17:49:07.301741 | orchestrator | Friday 29 August 2025 17:45:02 +0000 (0:00:03.310) 0:01:03.957 ********* 2025-08-29 17:49:07.301750 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-08-29 17:49:07.301760 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-08-29 17:49:07.301778 | orchestrator | 2025-08-29 17:49:07.301788 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-08-29 17:49:07.301798 | orchestrator | Friday 29 August 2025 17:45:13 +0000 (0:00:11.299) 0:01:15.257 ********* 2025-08-29 17:49:07.301808 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-08-29 17:49:07.301818 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-08-29 17:49:07.301829 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-08-29 17:49:07.301839 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-08-29 17:49:07.301849 | orchestrator | 2025-08-29 17:49:07.301860 | orchestrator | TASK [octavia : Create loadbalancer management network] 
************************ 2025-08-29 17:49:07.301869 | orchestrator | Friday 29 August 2025 17:45:29 +0000 (0:00:15.822) 0:01:31.079 ********* 2025-08-29 17:49:07.301879 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:49:07.301889 | orchestrator | 2025-08-29 17:49:07.301899 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-08-29 17:49:07.301908 | orchestrator | Friday 29 August 2025 17:45:35 +0000 (0:00:05.515) 0:01:36.595 ********* 2025-08-29 17:49:07.301918 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:49:07.301928 | orchestrator | 2025-08-29 17:49:07.301937 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-08-29 17:49:07.301947 | orchestrator | Friday 29 August 2025 17:45:40 +0000 (0:00:05.204) 0:01:41.799 ********* 2025-08-29 17:49:07.301957 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:49:07.301967 | orchestrator | 2025-08-29 17:49:07.301977 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-08-29 17:49:07.301987 | orchestrator | Friday 29 August 2025 17:45:40 +0000 (0:00:00.246) 0:01:42.045 ********* 2025-08-29 17:49:07.301996 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:49:07.302006 | orchestrator | 2025-08-29 17:49:07.302073 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 17:49:07.302087 | orchestrator | Friday 29 August 2025 17:45:45 +0000 (0:00:05.238) 0:01:47.284 ********* 2025-08-29 17:49:07.302097 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:49:07.302107 | orchestrator | 2025-08-29 17:49:07.302116 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-08-29 17:49:07.302125 | orchestrator | Friday 29 August 2025 17:45:48 +0000 (0:00:02.472) 0:01:49.756 
********* 2025-08-29 17:49:07.302135 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:49:07.302145 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:49:07.302154 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:49:07.302164 | orchestrator | 2025-08-29 17:49:07.302173 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-08-29 17:49:07.302183 | orchestrator | Friday 29 August 2025 17:45:53 +0000 (0:00:05.361) 0:01:55.117 ********* 2025-08-29 17:49:07.302193 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:49:07.302202 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:49:07.302212 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:49:07.302221 | orchestrator | 2025-08-29 17:49:07.302231 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-08-29 17:49:07.302240 | orchestrator | Friday 29 August 2025 17:45:57 +0000 (0:00:04.113) 0:01:59.231 ********* 2025-08-29 17:49:07.302250 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:49:07.302260 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:49:07.302269 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:49:07.302278 | orchestrator | 2025-08-29 17:49:07.302288 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-08-29 17:49:07.302312 | orchestrator | Friday 29 August 2025 17:45:58 +0000 (0:00:00.935) 0:02:00.166 ********* 2025-08-29 17:49:07.302329 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:49:07.302345 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:49:07.302361 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:49:07.302377 | orchestrator | 2025-08-29 17:49:07.302387 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-08-29 17:49:07.302397 | orchestrator | Friday 29 August 2025 17:46:00 +0000 (0:00:02.035) 0:02:02.202 ********* 2025-08-29 
17:49:07.302406 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:49:07.302416 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:49:07.302425 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:49:07.302435 | orchestrator | 2025-08-29 17:49:07.302445 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-08-29 17:49:07.302454 | orchestrator | Friday 29 August 2025 17:46:02 +0000 (0:00:01.362) 0:02:03.564 ********* 2025-08-29 17:49:07.302482 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:49:07.302491 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:49:07.302501 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:49:07.302510 | orchestrator | 2025-08-29 17:49:07.302520 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-08-29 17:49:07.302529 | orchestrator | Friday 29 August 2025 17:46:03 +0000 (0:00:01.143) 0:02:04.707 ********* 2025-08-29 17:49:07.302539 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:49:07.302549 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:49:07.302558 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:49:07.302568 | orchestrator | 2025-08-29 17:49:07.302618 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-08-29 17:49:07.302630 | orchestrator | Friday 29 August 2025 17:46:05 +0000 (0:00:01.984) 0:02:06.692 ********* 2025-08-29 17:49:07.302640 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:49:07.302650 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:49:07.302659 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:49:07.302669 | orchestrator | 2025-08-29 17:49:07.302678 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-08-29 17:49:07.302688 | orchestrator | Friday 29 August 2025 17:46:06 +0000 (0:00:01.502) 0:02:08.194 ********* 2025-08-29 17:49:07.302698 
| orchestrator | ok: [testbed-node-0] 2025-08-29 17:49:07.302707 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:49:07.302717 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:49:07.302726 | orchestrator | 2025-08-29 17:49:07.302736 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-08-29 17:49:07.302746 | orchestrator | Friday 29 August 2025 17:46:07 +0000 (0:00:00.899) 0:02:09.094 ********* 2025-08-29 17:49:07.302755 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:49:07.302765 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:49:07.302774 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:49:07.302783 | orchestrator | 2025-08-29 17:49:07.302793 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 17:49:07.302803 | orchestrator | Friday 29 August 2025 17:46:10 +0000 (0:00:02.926) 0:02:12.020 ********* 2025-08-29 17:49:07.302812 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:49:07.302822 | orchestrator | 2025-08-29 17:49:07.302832 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-08-29 17:49:07.302842 | orchestrator | Friday 29 August 2025 17:46:11 +0000 (0:00:00.547) 0:02:12.567 ********* 2025-08-29 17:49:07.302851 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:49:07.302861 | orchestrator | 2025-08-29 17:49:07.302870 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-08-29 17:49:07.302880 | orchestrator | Friday 29 August 2025 17:46:15 +0000 (0:00:04.399) 0:02:16.967 ********* 2025-08-29 17:49:07.302889 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:49:07.302899 | orchestrator | 2025-08-29 17:49:07.302908 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-08-29 17:49:07.302926 | 
orchestrator | Friday 29 August 2025 17:46:18 +0000 (0:00:03.075) 0:02:20.042 ********* 2025-08-29 17:49:07.302936 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-08-29 17:49:07.302946 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-08-29 17:49:07.302955 | orchestrator | 2025-08-29 17:49:07.302965 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-08-29 17:49:07.302975 | orchestrator | Friday 29 August 2025 17:46:25 +0000 (0:00:06.630) 0:02:26.673 ********* 2025-08-29 17:49:07.302984 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:49:07.302994 | orchestrator | 2025-08-29 17:49:07.303004 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-08-29 17:49:07.303013 | orchestrator | Friday 29 August 2025 17:46:28 +0000 (0:00:03.232) 0:02:29.905 ********* 2025-08-29 17:49:07.303023 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:49:07.303032 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:49:07.303042 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:49:07.303051 | orchestrator | 2025-08-29 17:49:07.303061 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-08-29 17:49:07.303070 | orchestrator | Friday 29 August 2025 17:46:28 +0000 (0:00:00.385) 0:02:30.290 ********* 2025-08-29 17:49:07.303083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 17:49:07.303133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 17:49:07.303154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 17:49:07.303182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 17:49:07.303199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 17:49:07.303216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 17:49:07.303234 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.303253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.303309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.303322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.303340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.303351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.303361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:49:07.303371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:49:07.303382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:49:07.303392 | orchestrator | 2025-08-29 17:49:07.303402 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-08-29 17:49:07.303412 | orchestrator | Friday 29 August 2025 17:46:31 +0000 (0:00:02.611) 0:02:32.901 ********* 2025-08-29 17:49:07.303422 | orchestrator | 
skipping: [testbed-node-0] 2025-08-29 17:49:07.303432 | orchestrator | 2025-08-29 17:49:07.303490 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-08-29 17:49:07.303504 | orchestrator | Friday 29 August 2025 17:46:31 +0000 (0:00:00.133) 0:02:33.035 ********* 2025-08-29 17:49:07.303513 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:49:07.303523 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:49:07.303533 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:49:07.303548 | orchestrator | 2025-08-29 17:49:07.303558 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-08-29 17:49:07.303569 | orchestrator | Friday 29 August 2025 17:46:32 +0000 (0:00:00.576) 0:02:33.611 ********* 2025-08-29 17:49:07.303586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 17:49:07.303605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 17:49:07.303622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 17:49:07.303632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 17:49:07.303643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:49:07.303653 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:49:07.303694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 17:49:07.303713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 17:49:07.303724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 17:49:07.303735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 17:49:07.303745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:49:07.303755 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:49:07.303770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 17:49:07.303834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 17:49:07.303870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 
17:49:07.303888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 17:49:07.303907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:49:07.303924 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:49:07.303938 | orchestrator | 2025-08-29 17:49:07.303948 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 17:49:07.303957 | orchestrator | Friday 29 August 2025 17:46:32 +0000 (0:00:00.779) 0:02:34.390 ********* 2025-08-29 17:49:07.303967 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:49:07.303977 | orchestrator | 2025-08-29 17:49:07.303986 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-08-29 17:49:07.303996 | orchestrator | Friday 29 August 2025 17:46:33 
+0000 (0:00:00.662) 0:02:35.053 ********* 2025-08-29 17:49:07.304006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 17:49:07.304058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 
17:49:07.304071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 17:49:07.304082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 17:49:07.304092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2025-08-29 17:49:07.304102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 17:49:07.304112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.304138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.304172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.304195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.304205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.304215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.304225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:49:07.304236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:49:07.304256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:49:07.304266 | orchestrator | 2025-08-29 17:49:07.304276 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-08-29 17:49:07.304286 | orchestrator | Friday 29 August 2025 17:46:38 +0000 (0:00:05.231) 0:02:40.284 ********* 2025-08-29 17:49:07.304296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 17:49:07.304317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 17:49:07.304328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 17:49:07.304338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 17:49:07.304348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:49:07.304363 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:49:07.304381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 17:49:07.304392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 17:49:07.304406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 17:49:07.304416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 17:49:07.304426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:49:07.304436 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:49:07.304451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 17:49:07.304535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 17:49:07.304549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 17:49:07.304564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 17:49:07.304575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:49:07.304586 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:49:07.304596 | orchestrator | 2025-08-29 17:49:07.304606 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-08-29 17:49:07.304616 | orchestrator | Friday 29 August 2025 17:46:39 +0000 (0:00:01.071) 0:02:41.356 ********* 2025-08-29 17:49:07.304627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 17:49:07.304641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 17:49:07.304650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 17:49:07.304664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 17:49:07.304677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:49:07.304685 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:49:07.304694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2025-08-29 17:49:07.304703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 17:49:07.304716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 17:49:07.304725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 17:49:07.304739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:49:07.304748 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:49:07.304760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 17:49:07.304769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 17:49:07.304778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 17:49:07.304791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 17:49:07.304800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 
17:49:07.304808 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:49:07.304817 | orchestrator | 2025-08-29 17:49:07.304825 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-08-29 17:49:07.304833 | orchestrator | Friday 29 August 2025 17:46:40 +0000 (0:00:01.101) 0:02:42.457 ********* 2025-08-29 17:49:07.304849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 17:49:07.304861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 17:49:07.304870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 17:49:07.304884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 17:49:07.304893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 17:49:07.304903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 17:49:07.304924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.304943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.304957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.304979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.304994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 
17:49:07.305065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305078 | orchestrator | 2025-08-29 17:49:07.305095 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-08-29 17:49:07.305103 | orchestrator | Friday 29 August 2025 17:46:45 +0000 (0:00:04.914) 0:02:47.371 ********* 2025-08-29 17:49:07.305111 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-08-29 17:49:07.305119 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-08-29 17:49:07.305127 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-08-29 17:49:07.305135 | orchestrator | 2025-08-29 17:49:07.305143 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-08-29 17:49:07.305150 | orchestrator | Friday 29 August 2025 17:46:48 +0000 (0:00:02.352) 0:02:49.724 ********* 2025-08-29 17:49:07.305159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 17:49:07.305167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 17:49:07.305182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 17:49:07.305194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 17:49:07.305207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 17:49:07.305215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 17:49:07.305223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305309 | orchestrator | 2025-08-29 17:49:07.305317 | orchestrator | TASK [octavia : Copying over Octavia SSH key] 
**********************************
2025-08-29 17:49:07.305325 | orchestrator | Friday 29 August 2025 17:47:06 +0000 (0:00:18.250) 0:03:07.974 *********
2025-08-29 17:49:07.305333 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:49:07.305341 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:49:07.305349 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:49:07.305357 | orchestrator |
2025-08-29 17:49:07.305365 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2025-08-29 17:49:07.305373 | orchestrator | Friday 29 August 2025 17:47:08 +0000 (0:00:01.618) 0:03:09.593 *********
2025-08-29 17:49:07.305381 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2025-08-29 17:49:07.305389 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2025-08-29 17:49:07.305401 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2025-08-29 17:49:07.305409 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2025-08-29 17:49:07.305417 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2025-08-29 17:49:07.305425 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2025-08-29 17:49:07.305437 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2025-08-29 17:49:07.305445 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2025-08-29 17:49:07.305453 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2025-08-29 17:49:07.305475 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2025-08-29 17:49:07.305483 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2025-08-29 17:49:07.305491 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2025-08-29 17:49:07.305499 | orchestrator |
2025-08-29 17:49:07.305506 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2025-08-29 17:49:07.305514 | orchestrator | Friday 29 August 2025 17:47:13 +0000 (0:00:05.355) 0:03:14.948 *********
2025-08-29 17:49:07.305522 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2025-08-29 17:49:07.305530 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2025-08-29 17:49:07.305538 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2025-08-29 17:49:07.305545 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2025-08-29 17:49:07.305556 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2025-08-29 17:49:07.305564 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2025-08-29 17:49:07.305572 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2025-08-29 17:49:07.305580 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2025-08-29 17:49:07.305588 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2025-08-29 17:49:07.305596 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2025-08-29 17:49:07.305604 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2025-08-29 17:49:07.305612 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2025-08-29 17:49:07.305619 | orchestrator |
2025-08-29 17:49:07.305627 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2025-08-29 17:49:07.305635 | orchestrator | Friday 29 August 2025 17:47:18 +0000 (0:00:05.431) 0:03:20.380 *********
2025-08-29 17:49:07.305643 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2025-08-29 17:49:07.305651 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2025-08-29 17:49:07.305659 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2025-08-29 17:49:07.305666 | orchestrator |
changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-08-29 17:49:07.305674 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-08-29 17:49:07.305682 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-08-29 17:49:07.305690 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-08-29 17:49:07.305697 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-08-29 17:49:07.305705 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-08-29 17:49:07.305713 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-08-29 17:49:07.305721 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-08-29 17:49:07.305728 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-08-29 17:49:07.305736 | orchestrator | 2025-08-29 17:49:07.305744 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-08-29 17:49:07.305752 | orchestrator | Friday 29 August 2025 17:47:24 +0000 (0:00:05.223) 0:03:25.604 ********* 2025-08-29 17:49:07.305760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 17:49:07.305779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 17:49:07.305791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 17:49:07.305800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 17:49:07.305808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 17:49:07.305816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 17:49:07.305825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 17:49:07.305921 | orchestrator | 2025-08-29 17:49:07.305929 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 17:49:07.305937 | orchestrator | Friday 29 August 2025 17:47:27 +0000 (0:00:03.630) 0:03:29.234 ********* 2025-08-29 17:49:07.305945 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:49:07.305953 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:49:07.305960 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 17:49:07.305968 | orchestrator |
2025-08-29 17:49:07.305976 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2025-08-29 17:49:07.305984 | orchestrator | Friday 29 August 2025 17:47:28 +0000 (0:00:00.354) 0:03:29.589 *********
2025-08-29 17:49:07.305992 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:49:07.305999 | orchestrator |
2025-08-29 17:49:07.306007 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2025-08-29 17:49:07.306046 | orchestrator | Friday 29 August 2025 17:47:30 +0000 (0:00:02.236) 0:03:31.826 *********
2025-08-29 17:49:07.306056 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:49:07.306064 | orchestrator |
2025-08-29 17:49:07.306072 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2025-08-29 17:49:07.306080 | orchestrator | Friday 29 August 2025 17:47:32 +0000 (0:00:02.172) 0:03:33.999 *********
2025-08-29 17:49:07.306088 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:49:07.306096 | orchestrator |
2025-08-29 17:49:07.306108 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2025-08-29 17:49:07.306116 | orchestrator | Friday 29 August 2025 17:47:34 +0000 (0:00:02.269) 0:03:36.269 *********
2025-08-29 17:49:07.306124 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:49:07.306132 | orchestrator |
2025-08-29 17:49:07.306140 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2025-08-29 17:49:07.306147 | orchestrator | Friday 29 August 2025 17:47:36 +0000 (0:00:02.180) 0:03:38.449 *********
2025-08-29 17:49:07.306155 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:49:07.306163 | orchestrator |
2025-08-29 17:49:07.306170 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-08-29 17:49:07.306178 | orchestrator | Friday 29 August 2025 17:47:59 +0000 (0:00:22.771) 0:04:01.221 *********
2025-08-29 17:49:07.306186 | orchestrator |
2025-08-29 17:49:07.306194 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-08-29 17:49:07.306202 | orchestrator | Friday 29 August 2025 17:47:59 +0000 (0:00:00.094) 0:04:01.315 *********
2025-08-29 17:49:07.306209 | orchestrator |
2025-08-29 17:49:07.306222 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-08-29 17:49:07.306230 | orchestrator | Friday 29 August 2025 17:47:59 +0000 (0:00:00.088) 0:04:01.403 *********
2025-08-29 17:49:07.306238 | orchestrator |
2025-08-29 17:49:07.306246 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2025-08-29 17:49:07.306253 | orchestrator | Friday 29 August 2025 17:48:00 +0000 (0:00:00.083) 0:04:01.487 *********
2025-08-29 17:49:07.306261 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:49:07.306269 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:49:07.306277 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:49:07.306284 | orchestrator |
2025-08-29 17:49:07.306292 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2025-08-29 17:49:07.306300 | orchestrator | Friday 29 August 2025 17:48:20 +0000 (0:00:20.899) 0:04:22.386 *********
2025-08-29 17:49:07.306308 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:49:07.306316 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:49:07.306323 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:49:07.306337 | orchestrator |
2025-08-29 17:49:07.306351 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2025-08-29 17:49:07.306364 | orchestrator | Friday 29 August 2025 17:48:32 +0000 (0:00:11.905) 0:04:34.292 *********
2025-08-29 17:49:07.306377 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:49:07.306390 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:49:07.306402 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:49:07.306415 | orchestrator |
2025-08-29 17:49:07.306428 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2025-08-29 17:49:07.306441 | orchestrator | Friday 29 August 2025 17:48:41 +0000 (0:00:08.246) 0:04:42.538 *********
2025-08-29 17:49:07.306454 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:49:07.306484 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:49:07.306498 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:49:07.306511 | orchestrator |
2025-08-29 17:49:07.306526 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2025-08-29 17:49:07.306540 | orchestrator | Friday 29 August 2025 17:48:53 +0000 (0:00:12.070) 0:04:54.608 *********
2025-08-29 17:49:07.306553 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:49:07.306572 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:49:07.306586 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:49:07.306599 | orchestrator |
2025-08-29 17:49:07.306613 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:49:07.306628 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 17:49:07.306641 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 17:49:07.306656 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 17:49:07.306670 | orchestrator |
2025-08-29 17:49:07.306683 | orchestrator |
2025-08-29 17:49:07.306696 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:49:07.306709 | orchestrator | Friday 29 August 2025 17:49:05 +0000 (0:00:12.776) 0:05:07.385 *********
2025-08-29 17:49:07.306732 | orchestrator | ===============================================================================
2025-08-29 17:49:07.306746 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.77s
2025-08-29 17:49:07.306761 | orchestrator | octavia : Restart octavia-api container -------------------------------- 20.90s
2025-08-29 17:49:07.306774 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.25s
2025-08-29 17:49:07.306789 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.82s
2025-08-29 17:49:07.306802 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.49s
2025-08-29 17:49:07.306824 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 12.78s
2025-08-29 17:49:07.306838 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 12.07s
2025-08-29 17:49:07.306851 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.91s
2025-08-29 17:49:07.306865 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.30s
2025-08-29 17:49:07.306878 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.25s
2025-08-29 17:49:07.306892 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.19s
2025-08-29 17:49:07.306905 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.68s
2025-08-29 17:49:07.306919 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.63s
2025-08-29 17:49:07.306942 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.47s
2025-08-29 17:49:07.306955 | orchestrator | octavia : Create loadbalancer management network ------------------------ 5.52s
------------------------ 5.52s 2025-08-29 17:49:07.306969 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.43s 2025-08-29 17:49:07.306983 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.36s 2025-08-29 17:49:07.306996 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.36s 2025-08-29 17:49:07.307009 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.24s 2025-08-29 17:49:07.307022 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.23s 2025-08-29 17:49:07.307035 | orchestrator | 2025-08-29 17:49:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 17:49:10.335386 | orchestrator | 2025-08-29 17:49:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 17:49:13.371390 | orchestrator | 2025-08-29 17:49:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 17:49:16.415728 | orchestrator | 2025-08-29 17:49:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 17:49:19.447861 | orchestrator | 2025-08-29 17:49:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 17:49:22.493726 | orchestrator | 2025-08-29 17:49:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 17:49:25.531024 | orchestrator | 2025-08-29 17:49:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 17:49:28.574264 | orchestrator | 2025-08-29 17:49:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 17:49:31.612102 | orchestrator | 2025-08-29 17:49:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 17:49:34.650586 | orchestrator | 2025-08-29 17:49:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 17:49:37.694831 | orchestrator | 2025-08-29 17:49:37 | INFO  | Wait 1 second(s) until refresh of running tasks 
2025-08-29 17:49:40.738827 | orchestrator | 2025-08-29 17:49:40 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 17:49:43.775544 | orchestrator | 2025-08-29 17:49:43 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 17:49:46.808192 | orchestrator | 2025-08-29 17:49:46 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 17:49:49.840978 | orchestrator | 2025-08-29 17:49:49 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 17:49:52.870355 | orchestrator | 2025-08-29 17:49:52 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 17:49:55.906501 | orchestrator | 2025-08-29 17:49:55 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 17:49:58.942627 | orchestrator | 2025-08-29 17:49:58 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 17:50:01.970552 | orchestrator | 2025-08-29 17:50:01 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 17:50:05.004686 | orchestrator | 2025-08-29 17:50:05 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 17:50:08.040088 | orchestrator |
2025-08-29 17:50:08.467795 | orchestrator |
2025-08-29 17:50:08.474050 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Aug 29 17:50:08 UTC 2025
2025-08-29 17:50:08.474112 | orchestrator |
2025-08-29 17:50:08.878047 | orchestrator | ok: Runtime: 0:36:39.128790
2025-08-29 17:50:09.130675 |
2025-08-29 17:50:09.130893 | TASK [Bootstrap services]
2025-08-29 17:50:09.881880 | orchestrator |
2025-08-29 17:50:09.882144 | orchestrator | # BOOTSTRAP
2025-08-29 17:50:09.882168 | orchestrator |
2025-08-29 17:50:09.882182 | orchestrator | + set -e
2025-08-29 17:50:09.882195 | orchestrator | + echo
2025-08-29 17:50:09.882209 | orchestrator | + echo '# BOOTSTRAP'
2025-08-29 17:50:09.882227 | orchestrator | + echo
2025-08-29 17:50:09.882275 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-08-29 17:50:09.888959 |
orchestrator | + set -e
2025-08-29 17:50:09.889018 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-08-29 17:50:15.212992 | orchestrator | 2025-08-29 17:50:15 | INFO  | It takes a moment until task c051d5c2-b859-4b0f-bad2-7342c5c4a496 (flavor-manager) has been started and output is visible here.
2025-08-29 17:50:19.592168 | orchestrator | Traceback (most recent call last):
  /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:179 in run
     176 │ logger.add(sys.stderr, format=log_fmt, level=level, colorize=True)
     177 │
     178 │ definitions = get_flavor_definitions(name, url)
  ❱  179 │ manager = FlavorManager(
     180 │     cloud=Cloud(cloud), definitions=definitions, recommended=recom
     181 │ )
     182 │ manager.run()
  locals:
    cloud = 'admin'
    debug = False
    level = 'INFO'
    log_fmt = '{time:YYYY-MM-DD HH:mm:ss} | {level: <8} | '+17
    name = 'local'
    recommended = True
    url = None
    definitions = {
      'reference': [
        {'field': 'name', 'mandatory_prefix': 'SCS-'},
        {'field': 'cpus'},
        {'field': 'ram'},
        {'field': 'disk'},
        {'field': 'public', 'default': True},
        {'field': 'disabled', 'default': False}
      ],
      'mandatory': [
        {'name': 'SCS-1L-1', 'cpus': 1, 'ram': 1024, 'disk': 0, 'scs:cpu-type': 'crowded-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1L:1', 'scs:name-v2': 'SCS-1L-1', 'hw_rng:allowed': 'true'},
        {'name': 'SCS-1L-1-5', 'cpus': 1, 'ram': 1024, 'disk': 5, 'scs:cpu-type': 'crowded-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1L:5', 'scs:name-v2': 'SCS-1L-5', 'hw_rng:allowed': 'true'},
        {'name': 'SCS-1V-2', 'cpus': 1, 'ram': 2048, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:2', 'scs:name-v2': 'SCS-1V-2', 'hw_rng:allowed': 'true'},
        {'name': 'SCS-1V-2-5', 'cpus': 1, 'ram': 2048, 'disk': 5, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:2:5', 'scs:name-v2': 'SCS-1V-2-5', 'hw_rng:allowed': 'true'},
        {'name': 'SCS-1V-4', 'cpus': 1, 'ram': 4096, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:4', 'scs:name-v2': 'SCS-1V-4', 'hw_rng:allowed': 'true'},
        {'name': 'SCS-1V-4-10', 'cpus': 1, 'ram': 4096, 'disk': 10, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:4:10', 'scs:name-v2': 'SCS-1V-4-10', 'hw_rng:allowed': 'true'},
        {'name': 'SCS-1V-8', 'cpus': 1, 'ram': 8192, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:8', 'scs:name-v2': 'SCS-1V-8', 'hw_rng:allowed': 'true'},
        {'name': 'SCS-1V-8-20', 'cpus': 1, 'ram': 8192, 'disk': 20, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:8:20', 'scs:name-v2': 'SCS-1V-8-20', 'hw_rng:allowed': 'true'},
        {'name': 'SCS-2V-4', 'cpus': 2, 'ram': 4096, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-2V:4', 'scs:name-v2': 'SCS-2V-4', 'hw_rng:allowed': 'true'},
        {'name': 'SCS-2V-4-10', 'cpus': 2, 'ram': 4096, 'disk': 10, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-2V:4:10', 'scs:name-v2': 'SCS-2V-4-10', 'hw_rng:allowed': 'true'},
        ... +19
      ]
    }
  /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:97 in __init__
      94 │ self.required_flavors = definitions["mandatory"]
      95 │ self.cloud = cloud
      96 │ if recommended:
  ❱   97 │     self.required_flavors = self.required_flavors + definition
      98 │
      99 │ self.defaults_dict = {}
     100 │ for item in definitions["reference"]:
  locals:
    cloud =
    definitions = (same dict as in the `run` frame above)
    recommended = True
    self =
2025-08-29 17:50:19.795231 | orchestrator | KeyError: 'recommended'
2025-08-29 17:50:20.678492 | orchestrator | ERROR
2025-08-29 17:50:20.679163 | orchestrator | {
2025-08-29 17:50:20.679288 | orchestrator |     "delta": "0:00:10.838262",
2025-08-29 17:50:20.679359 | orchestrator |     "end": "2025-08-29 17:50:20.331954",
2025-08-29 17:50:20.679418 | orchestrator |     "msg": "non-zero return code",
2025-08-29 17:50:20.679473 | orchestrator |     "rc": 1,
2025-08-29 17:50:20.679568 | orchestrator |     "start": "2025-08-29 17:50:09.493692"
2025-08-29 17:50:20.679628 | orchestrator | } failure
2025-08-29 17:50:20.696097 |
2025-08-29 17:50:20.696207 | PLAY RECAP
2025-08-29 17:50:20.696273 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-08-29 17:50:20.696304 |
2025-08-29 17:50:20.931998 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-08-29 17:50:20.933119 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-08-29 17:50:21.857341 |
2025-08-29 17:50:21.857515 | PLAY [Post output play]
2025-08-29 17:50:21.873451 |
2025-08-29 17:50:21.873629 | LOOP [stage-output : Register sources]
2025-08-29 17:50:21.936266 |
2025-08-29 17:50:21.936590 | TASK [stage-output : Check sudo]
2025-08-29 17:50:22.795170 | orchestrator | sudo: a password is required
2025-08-29 17:50:22.972888 | orchestrator | ok: Runtime: 0:00:00.014282
2025-08-29 17:50:22.985482 |
2025-08-29 17:50:22.985699 | LOOP
[stage-output : Set source and destination for files and folders]
2025-08-29 17:50:23.034257 |
2025-08-29 17:50:23.034643 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-08-29 17:50:23.113685 | orchestrator | ok
2025-08-29 17:50:23.126665 |
2025-08-29 17:50:23.126973 | LOOP [stage-output : Ensure target folders exist]
2025-08-29 17:50:23.600997 | orchestrator | ok: "docs"
2025-08-29 17:50:23.601347 |
2025-08-29 17:50:23.868092 | orchestrator | ok: "artifacts"
2025-08-29 17:50:24.145446 | orchestrator | ok: "logs"
2025-08-29 17:50:24.170140 |
2025-08-29 17:50:24.170364 | LOOP [stage-output : Copy files and folders to staging folder]
2025-08-29 17:50:24.205450 |
2025-08-29 17:50:24.205732 | TASK [stage-output : Make all log files readable]
2025-08-29 17:50:24.496377 | orchestrator | ok
2025-08-29 17:50:24.504845 |
2025-08-29 17:50:24.504984 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-08-29 17:50:24.550504 | orchestrator | skipping: Conditional result was False
2025-08-29 17:50:24.567950 |
2025-08-29 17:50:24.568128 | TASK [stage-output : Discover log files for compression]
2025-08-29 17:50:24.596647 | orchestrator | skipping: Conditional result was False
2025-08-29 17:50:24.616416 |
2025-08-29 17:50:24.616682 | LOOP [stage-output : Archive everything from logs]
2025-08-29 17:50:24.668381 |
2025-08-29 17:50:24.668635 | PLAY [Post cleanup play]
2025-08-29 17:50:24.677931 |
2025-08-29 17:50:24.678051 | TASK [Set cloud fact (Zuul deployment)]
2025-08-29 17:50:24.761862 | orchestrator | ok
2025-08-29 17:50:24.774570 |
2025-08-29 17:50:24.774716 | TASK [Set cloud fact (local deployment)]
2025-08-29 17:50:24.810061 | orchestrator | skipping: Conditional result was False
2025-08-29 17:50:24.827574 |
2025-08-29 17:50:24.827757 | TASK [Clean the cloud environment]
2025-08-29 17:50:25.462126 | orchestrator | 2025-08-29 17:50:25 - clean up servers
2025-08-29 17:50:26.194157 | orchestrator | 2025-08-29 17:50:26 - testbed-manager
2025-08-29 17:50:26.276925 | orchestrator | 2025-08-29 17:50:26 - testbed-node-1
2025-08-29 17:50:26.362268 | orchestrator | 2025-08-29 17:50:26 - testbed-node-5
2025-08-29 17:50:26.450811 | orchestrator | 2025-08-29 17:50:26 - testbed-node-2
2025-08-29 17:50:26.553548 | orchestrator | 2025-08-29 17:50:26 - testbed-node-4
2025-08-29 17:50:26.650608 | orchestrator | 2025-08-29 17:50:26 - testbed-node-0
2025-08-29 17:50:26.741310 | orchestrator | 2025-08-29 17:50:26 - testbed-node-3
2025-08-29 17:50:26.839021 | orchestrator | 2025-08-29 17:50:26 - clean up keypairs
2025-08-29 17:50:26.856272 | orchestrator | 2025-08-29 17:50:26 - testbed
2025-08-29 17:50:26.881957 | orchestrator | 2025-08-29 17:50:26 - wait for servers to be gone
2025-08-29 17:50:37.727722 | orchestrator | 2025-08-29 17:50:37 - clean up ports
2025-08-29 17:50:37.928963 | orchestrator | 2025-08-29 17:50:37 - 30143de4-94e9-49ad-ae4b-7b3450f3e124
2025-08-29 17:50:38.191001 | orchestrator | 2025-08-29 17:50:38 - 35102cfe-6915-4914-af33-3ce361112032
2025-08-29 17:50:38.689647 | orchestrator | 2025-08-29 17:50:38 - 366ac89f-c0fc-403d-a506-57ce2a2bbb8a
2025-08-29 17:50:38.913257 | orchestrator | 2025-08-29 17:50:38 - 5c5caba2-67c9-40f9-9533-9f1a1c695d84
2025-08-29 17:50:39.165985 | orchestrator | 2025-08-29 17:50:39 - 5dda665c-18ce-41a4-9df1-8032986445a0
2025-08-29 17:50:39.368833 | orchestrator | 2025-08-29 17:50:39 - 5dee14bd-fbaa-4a4e-8629-a3788821ead7
2025-08-29 17:50:39.599252 | orchestrator | 2025-08-29 17:50:39 - c43d0254-a98c-408f-90a2-2323b2b73318
2025-08-29 17:50:39.804570 | orchestrator | 2025-08-29 17:50:39 - clean up volumes
2025-08-29 17:50:39.943574 | orchestrator | 2025-08-29 17:50:39 - testbed-volume-4-node-base
2025-08-29 17:50:39.992490 | orchestrator | 2025-08-29 17:50:39 - testbed-volume-1-node-base
2025-08-29 17:50:40.034424 | orchestrator | 2025-08-29 17:50:40 - testbed-volume-manager-base
2025-08-29 17:50:40.076809 | orchestrator | 2025-08-29 17:50:40 - testbed-volume-2-node-base
2025-08-29 17:50:40.119729 | orchestrator | 2025-08-29 17:50:40 - testbed-volume-0-node-base
2025-08-29 17:50:40.160493 | orchestrator | 2025-08-29 17:50:40 - testbed-volume-3-node-base
2025-08-29 17:50:40.202612 | orchestrator | 2025-08-29 17:50:40 - testbed-volume-5-node-base
2025-08-29 17:50:40.248330 | orchestrator | 2025-08-29 17:50:40 - testbed-volume-1-node-4
2025-08-29 17:50:40.430387 | orchestrator | 2025-08-29 17:50:40 - testbed-volume-2-node-5
2025-08-29 17:50:40.475023 | orchestrator | 2025-08-29 17:50:40 - testbed-volume-7-node-4
2025-08-29 17:50:40.519575 | orchestrator | 2025-08-29 17:50:40 - testbed-volume-8-node-5
2025-08-29 17:50:40.561732 | orchestrator | 2025-08-29 17:50:40 - testbed-volume-6-node-3
2025-08-29 17:50:40.603176 | orchestrator | 2025-08-29 17:50:40 - testbed-volume-0-node-3
2025-08-29 17:50:40.644635 | orchestrator | 2025-08-29 17:50:40 - testbed-volume-4-node-4
2025-08-29 17:50:40.686253 | orchestrator | 2025-08-29 17:50:40 - testbed-volume-5-node-5
2025-08-29 17:50:40.729465 | orchestrator | 2025-08-29 17:50:40 - testbed-volume-3-node-3
2025-08-29 17:50:40.772000 | orchestrator | 2025-08-29 17:50:40 - disconnect routers
2025-08-29 17:50:40.881971 | orchestrator | 2025-08-29 17:50:40 - testbed
2025-08-29 17:50:42.297743 | orchestrator | 2025-08-29 17:50:42 - clean up subnets
2025-08-29 17:50:42.353498 | orchestrator | 2025-08-29 17:50:42 - subnet-testbed-management
2025-08-29 17:50:42.493936 | orchestrator | 2025-08-29 17:50:42 - clean up networks
2025-08-29 17:50:42.670542 | orchestrator | 2025-08-29 17:50:42 - net-testbed-management
2025-08-29 17:50:42.945609 | orchestrator | 2025-08-29 17:50:42 - clean up security groups
2025-08-29 17:50:42.984531 | orchestrator | 2025-08-29 17:50:42 - testbed-node
2025-08-29 17:50:43.093654 | orchestrator | 2025-08-29 17:50:43 - testbed-management
2025-08-29 17:50:43.218960 | orchestrator | 2025-08-29 17:50:43 - clean up floating ips
2025-08-29 17:50:43.262347 | orchestrator | 2025-08-29 17:50:43 - 81.163.192.184
2025-08-29 17:50:43.630244 | orchestrator | 2025-08-29 17:50:43 - clean up routers
2025-08-29 17:50:43.734325 | orchestrator | 2025-08-29 17:50:43 - testbed
2025-08-29 17:50:44.886276 | orchestrator | ok: Runtime: 0:00:19.403879
2025-08-29 17:50:44.890924 |
2025-08-29 17:50:44.891099 | PLAY RECAP
2025-08-29 17:50:44.891233 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-08-29 17:50:44.891298 |
2025-08-29 17:50:45.036000 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-08-29 17:50:45.037071 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-08-29 17:50:45.799396 |
2025-08-29 17:50:45.799633 | PLAY [Cleanup play]
2025-08-29 17:50:45.817394 |
2025-08-29 17:50:45.817602 | TASK [Set cloud fact (Zuul deployment)]
2025-08-29 17:50:45.873639 | orchestrator | ok
2025-08-29 17:50:45.882959 |
2025-08-29 17:50:45.883116 | TASK [Set cloud fact (local deployment)]
2025-08-29 17:50:45.907883 | orchestrator | skipping: Conditional result was False
2025-08-29 17:50:45.923997 |
2025-08-29 17:50:45.924135 | TASK [Clean the cloud environment]
2025-08-29 17:50:47.047600 | orchestrator | 2025-08-29 17:50:47 - clean up servers
2025-08-29 17:50:47.513123 | orchestrator | 2025-08-29 17:50:47 - clean up keypairs
2025-08-29 17:50:47.529430 | orchestrator | 2025-08-29 17:50:47 - wait for servers to be gone
2025-08-29 17:50:47.571853 | orchestrator | 2025-08-29 17:50:47 - clean up ports
2025-08-29 17:50:47.664921 | orchestrator | 2025-08-29 17:50:47 - clean up volumes
2025-08-29 17:50:47.727153 | orchestrator | 2025-08-29 17:50:47 - disconnect routers
2025-08-29 17:50:47.756504 | orchestrator | 2025-08-29 17:50:47 - clean up subnets
2025-08-29 17:50:47.776001 | orchestrator | 2025-08-29 17:50:47 - clean up networks
2025-08-29 17:50:47.934649 | orchestrator | 2025-08-29 17:50:47 - clean up security groups
2025-08-29 17:50:47.968079 | orchestrator | 2025-08-29 17:50:47 - clean up floating ips
2025-08-29 17:50:47.991496 | orchestrator | 2025-08-29 17:50:47 - clean up routers
2025-08-29 17:50:48.463212 | orchestrator | ok: Runtime: 0:00:01.355564
2025-08-29 17:50:48.467039 |
2025-08-29 17:50:48.467201 | PLAY RECAP
2025-08-29 17:50:48.467332 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-08-29 17:50:48.467403 |
2025-08-29 17:50:48.597072 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-08-29 17:50:48.598107 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-08-29 17:50:49.337825 |
2025-08-29 17:50:49.337989 | PLAY [Base post-fetch]
2025-08-29 17:50:49.354069 |
2025-08-29 17:50:49.354198 | TASK [fetch-output : Set log path for multiple nodes]
2025-08-29 17:50:49.420477 | orchestrator | skipping: Conditional result was False
2025-08-29 17:50:49.435296 |
2025-08-29 17:50:49.435490 | TASK [fetch-output : Set log path for single node]
2025-08-29 17:50:49.484287 | orchestrator | ok
2025-08-29 17:50:49.493309 |
2025-08-29 17:50:49.493452 | LOOP [fetch-output : Ensure local output dirs]
2025-08-29 17:50:50.009499 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/1c639c5fc7074b538a16b20865708ec8/work/logs"
2025-08-29 17:50:50.280768 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1c639c5fc7074b538a16b20865708ec8/work/artifacts"
2025-08-29 17:50:50.547660 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1c639c5fc7074b538a16b20865708ec8/work/docs"
2025-08-29 17:50:50.561466 |
2025-08-29 17:50:50.561653 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-08-29 17:50:51.598668 | orchestrator | changed: .d..t...... ./
2025-08-29 17:50:51.599053 | orchestrator | changed: All items complete
2025-08-29 17:50:51.599102 |
2025-08-29 17:50:52.352264 | orchestrator | changed: .d..t...... ./
2025-08-29 17:50:53.087721 | orchestrator | changed: .d..t...... ./
2025-08-29 17:50:53.120686 |
2025-08-29 17:50:53.120860 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-08-29 17:50:53.160030 | orchestrator | skipping: Conditional result was False
2025-08-29 17:50:53.163630 | orchestrator | skipping: Conditional result was False
2025-08-29 17:50:53.184442 |
2025-08-29 17:50:53.184611 | PLAY RECAP
2025-08-29 17:50:53.184698 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-08-29 17:50:53.184743 |
2025-08-29 17:50:53.320038 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-08-29 17:50:53.321117 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-08-29 17:50:54.071250 |
2025-08-29 17:50:54.071455 | PLAY [Base post]
2025-08-29 17:50:54.087039 |
2025-08-29 17:50:54.087198 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-08-29 17:50:55.082690 | orchestrator | changed
2025-08-29 17:50:55.093449 |
2025-08-29 17:50:55.093596 | PLAY RECAP
2025-08-29 17:50:55.093677 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-08-29 17:50:55.093757 |
2025-08-29 17:50:55.217007 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-08-29 17:50:55.219405 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-08-29 17:50:56.078371 |
2025-08-29 17:50:56.078629 | PLAY [Base post-logs]
2025-08-29 17:50:56.090429 |
2025-08-29 17:50:56.090625 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-08-29 17:50:56.567174 | localhost | changed
2025-08-29 17:50:56.584934 |
2025-08-29 17:50:56.585115 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-08-29 17:50:56.624206 | localhost | ok
2025-08-29 17:50:56.629420 |
2025-08-29 17:50:56.629618 | TASK [Set zuul-log-path fact]
2025-08-29 17:50:56.658103 | localhost | ok
2025-08-29 17:50:56.673816 |
2025-08-29 17:50:56.673992 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-08-29 17:50:56.713907 | localhost | ok
2025-08-29 17:50:56.719969 |
2025-08-29 17:50:56.720124 | TASK [upload-logs : Create log directories]
2025-08-29 17:50:57.247279 | localhost | changed
2025-08-29 17:50:57.250371 |
2025-08-29 17:50:57.250481 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-08-29 17:50:57.771670 | localhost -> localhost | ok: Runtime: 0:00:00.007832
2025-08-29 17:50:57.777081 |
2025-08-29 17:50:57.777255 | TASK [upload-logs : Upload logs to log server]
2025-08-29 17:50:58.372365 | localhost | Output suppressed because no_log was given
2025-08-29 17:50:58.374958 |
2025-08-29 17:50:58.375092 | LOOP [upload-logs : Compress console log and json output]
2025-08-29 17:50:58.426711 | localhost | skipping: Conditional result was False
2025-08-29 17:50:58.431267 | localhost | skipping: Conditional result was False
2025-08-29 17:50:58.444944 |
2025-08-29 17:50:58.445116 | LOOP [upload-logs : Upload compressed console log and json output]
2025-08-29 17:50:58.508032 | localhost | skipping: Conditional result was False
2025-08-29 17:50:58.508750 |
2025-08-29 17:50:58.512378 | localhost | skipping: Conditional result was False
2025-08-29 17:50:58.526560 |
2025-08-29 17:50:58.526881 | LOOP [upload-logs : Upload console log and json output]
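Note: the "Clean the cloud environment" task above tears resources down in a fixed dependency order (servers before ports, ports before subnets, routers last), which is why the second cleanup run completes in about a second once everything is already gone. A minimal sketch of that ordering, assuming a hypothetical `cleanup_steps` helper (the names mirror the log messages; this is illustrative and not part of the actual testbed tooling):

```python
# Teardown order observed in the cleanup log: resources that depend on
# others are removed first, so e.g. ports are gone before their subnets
# and the router is deleted only after it has been disconnected.
CLEANUP_ORDER = [
    "servers",
    "keypairs",
    "ports",
    "volumes",
    "router interfaces",  # "disconnect routers" in the log
    "subnets",
    "networks",
    "security groups",
    "floating ips",
    "routers",
]


def cleanup_steps():
    """Yield log-style messages in dependency order (hypothetical helper)."""
    for resource in CLEANUP_ORDER:
        yield f"clean up {resource}"


if __name__ == "__main__":
    for step in cleanup_steps():
        print(step)
```

Because each step is safe to run when the resource is already absent, the cleanup playbook can be replayed as a post-run safety net without special-casing a partially deleted environment.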